Category Archives: Governance

Death in Hamburg: Society and Politics in the Cholera Years, 1830-1910 by Richard Evans

6/10. Death in Hamburg: Society and Politics in the Cholera Years, 1830-1910 by Richard Evans

This book promises much but ultimately rather disappoints, largely because of its tendency to lose focus, sprawling into this or that side-avenue. Partly this must be due to a lack of clarity as to what the book is about — an impression strongly reinforced by the book’s afterword, which does much to illuminate the intentions of the author.

Is this a narrative history? An analytic investigation of public health provision, focusing especially on the 1892 epidemic? A wide-ranging overview of Hamburg society and the mentality of its dominant classes? A Marxist-influenced study of class tension and conflict, or …? The author does not seem to be sure. The result is rather a mish-mash.

At some points we seem to be investigating the political and social reasons for Hamburg’s poor public health outcomes, in particular the constant fighting between the different ‘fractions of capital’ (especially the merchant/lawyer senators and the property-owners) over the provision of public goods; at others we are given a detailed description of working-class living conditions; at another a history of medical approaches to cholera and other diseases in the 19th century; and at another a detailed account of how the ‘dominant’ classes used charitable support, both in general and after the 1892 epidemic, to exercise social and moral ‘control’.

Of course, it is possible these different approaches and angles could have been woven together to produce a single rich and compelling whole. But this is not so. Take the main focus of the book, which I take to be the cholera epidemic of 1892 together with its causes and outcomes. By the time I had finished the more than 700 pages I was still unsure as to what, in Professor Evans’s view, were the main reasons for Hamburg’s terrible performance in comparison with other German (or European) cities. To pick just a few of the possible ones:

  1. The failure to develop sand-filtration for the public water supply. Was this in turn due to:
    • The form of the Citizens’ Assembly, in particular the ability of the property owners to block improvements that might result in reductions in their profits.
    • Early investment in a new water system which then made it relatively more costly to upgrade later (Hamburg was one of the first cities in Germany to develop an external reservoir).
    • Ideological opposition (see next items)
  2. The ideological commitment of Hamburg’s ruling groups to ‘Trade’ and ‘Laissez Faire’
    • Reinforced, perhaps, by direct self-interest in the case of ship-owners and others for whom quarantine meant serious disturbance to their work or enterprise
  3. The inefficient governance structures (in particular the operation and make-up of the Senate and Burgomaster)
    • Hamburg’s governance compared particularly poorly with the more efficient, though also more authoritarian, action of the Imperial government (particularly that of the Imperial Health Office and Koch).
  4. Continuing support in medical circles (and in administrative positions) for ‘miasmatist’ rather than ‘contagionist’ theories of disease (especially in relation to Cholera)
  5. The inadequate living conditions of the poor especially in the ‘Alley Quarters’.
  6. Incorrect medical treatment either due to lack of medical knowledge or incompetence.
  7. The (in)ability of different socio-economic groups to follow the medical instructions provided — whether because of wealth (e.g. ‘rich’: able to have their servant boil all their water, ‘poor’: unable to resist the fruit which is suddenly cheaply available because normally denied it), literacy (can one read the instructions distributed), respect for ‘authority’, etc.

One would not expect to have a single explanation put forward but it would be useful to have some indication of which of these items were the more important, particularly where different reasons are substitutes not complements. For example, at several points Evans appears to indicate that the water-supply was the single biggest determinant of death by far (he cites a particularly illuminating comparison of a set of apartments that drew its water from two different sources). But if this is so then almost all of the focus should be on the water-supply question and why this public good was not present in Hamburg when it was elsewhere. No doubt, in answering this, one will be led onto many of the other items as secondary causes, but an important step will have been made in stratifying, and thereby clarifying, the analysis. Furthermore, from this perspective an explicit comparative analysis with other localities becomes essential. While Evans does perform this to some extent, it is largely in terms of the behaviour of the localities in 1892 (e.g. regarding the imposition of quarantine) rather than the more important investigation of why those localities had sand filtration while Hamburg did not — in particular, why had they found the political will to provide this important public good while Hamburg had not? Why were the property-owners in Bremen, Berlin and elsewhere not able to block these same kinds of public infrastructure projects?

Once led down this route the reader must be increasingly concerned about the weight, and attention, Evans focuses upon socio-ideological explanations (made particularly noticeable by the frequent intrusion of Marxist historiographical language and approach — an influence made explicit in the afterword). As Evans acknowledges, in respect of most other disease outcomes Hamburg did little worse than elsewhere in Germany. If this is so, how much does the 1892 epidemic really tell us about the society and politics of Hamburg (and vice versa)? Perhaps if Hamburg had not invested early in its water supply, it would have had an ‘out-of-date’ one by 1892? Perhaps if Veresman had been Burgomaster, more rapid and effective steps would have been taken early on that would have dramatically reduced the impact? Perhaps if Hamburg had been more authoritarian (rather than more democratic), the Senate would have been able to improve the water-supply earlier?

This brings me on to my final comment. The contemporary relevance of the book is emphasized in several places, for example on the back-jacket text and in several of the blurbs — Gordon Craig’s NYRB review extract quoted on the cover reads “… about the contemporary relevance of this book there can be no question”. Of course, we should allow for the fact that this was published in 1987 when the AIDS epidemic was receiving very widespread attention. But one does need to ask exactly what one does learn from this book regarding public health? That we should invest in public goods projects? That it is good for medical science to be accurate and correct? That one should respond rapidly to an outbreak of a contagious disease?

Surely the answer to all of these is yes. The devil, of course, is in the detail. How do we trade off the benefits of rapid and sharp response, which is likely to involve sharply restricting movement of persons and goods, against the costs of such restrictions both socially and commercially? What institutional structures will result in adequate investment in public goods and rapid response to public concerns? Are there tensions between responsiveness to concerns (e.g. via full representative government!) and effectiveness in action (which might necessitate a single executive office with significant power and autonomy)? Finally, if the answers to these questions are reasonably obvious (e.g. it’s Democracy, stupid!) then what prevents a polity, whether today or in the 19th century, from acting in the correct way? (Answer: entrenched powers and vested interests — but how did these come into being and how are they overcome?)

The test then of Evans’ book is whether it supplies us with interesting answers to these, more nuanced, questions. In this regard the book, I feel, comes up short. Without a comparative analysis at the social, and more importantly, political level in other German (or European) cities how can we know whether Hamburg’s terrible experience was the result of a common generalizable pattern or mere historical accident?

In sum this is an interesting book, albeit a little lengthy and heavy-going in places. Confused as to its structure and purpose, it largely fails to deliver on its promise to answer the main question posed on its jacket: “Why were nearly 10000 people killed in six weeks in Hamburg while most of Europe was left almost unscathed?” As such it is also limited in the light it can throw on public health problems today. Nevertheless the reader will have been left with wide-ranging coverage of a whole variety of 19th-century topics, most significantly the two items explicitly mentioned in the title: Hamburg and cholera.

On the Optimal Size of Nations, Organizations etc

This is a topic I’ve thought about quite a bit before (see notes 1 and 2 below) but on this particular occasion it arose from a discussion with a friend about the size of Cambridge colleges and the growth of the EU.

Having many smaller, different, competing organizations rather than one (or a few) bigger ones is:

  • Good because of variation: the average rate of improvement is proportional to the variance in current quality (cf. Fisher’s theorem for natural selection).
  • Bad because lack of standardization means fewer economies of scale and scope and higher transaction costs (e.g. for trade) and greater free-rider issues across jurisdictions/organizations.
  • Good because smaller means more responsive to the preferences of participants and fewer transaction costs (e.g. in monitoring and voting) and fewer free-rider problems within jurisdictions/organizations.
  • Good because humans ‘feel’ better (more autonomous, more in control of their lives, understand better what is happening to them) within jurisdictions/organizations of a smaller size.
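
The first point, that the average rate of improvement is proportional to the variance in current quality, can be sketched with a toy replicator-dynamics model. This is a hedged illustration: the quality numbers and the proportional-growth rule are assumptions made for the sketch, not anything from Fisher's original setting.

```python
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def mean_after_selection(qualities):
    # Each organization's weight next round is proportional to its
    # quality (replicator dynamics), so the new mean is E[q^2] / E[q].
    return sum(q * q for q in qualities) / (len(qualities) * mean(qualities))

qs_diverse = [0.5, 1.0, 1.5, 2.0]   # many differing organizations
qs_uniform = [1.25] * 4             # standardized organizations, same mean

gain_diverse = mean_after_selection(qs_diverse) - mean(qs_diverse)
gain_uniform = mean_after_selection(qs_uniform) - mean(qs_uniform)

# Fisher's theorem in its simplest form: gain per round = variance / mean
assert abs(gain_diverse - variance(qs_diverse) / mean(qs_diverse)) < 1e-9
assert gain_uniform == 0.0
```

In the diverse population the mean quality rises by variance/mean each round of selection; in the standardized one it does not move at all, which is the sense in which variation is ‘good’ here.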

One interesting point is that simple prisoner’s-dilemma-type arguments arising from inter-state conflict (whether over territory or over the inter-country rules of the game for things like trade — consider the US vs. Burkina Faso in a bilateral trade negotiation) would imply that there is an escalation effect in country size (if you get bigger I want to get bigger). This would then imply that countries are larger than they should be for simple (i.e. ignoring fights with others) welfare maximization for citizens (cf. Christopher Alexander’s pattern for states suggesting a size of between 5-15 million citizens with the fact that almost all states are significantly larger than this).

[Later] The very day of writing this I came across a review of Alesina and Spolaore, The Size of Nations, while idly flicking through old JEL (March 2005) reviews. It appears from the review to make several similar points (not exactly surprising given how the ideas themselves are fairly obvious …):

“The nation-state is defined in terms of a monopoly of coercion and the legal use of force within its boundaries [ed: Weber’s definition]. The central tenet of the book is that the size of nations is determined by a trade-off between the benefits of economies of scale in providing public goods (e.g. defense) and the costs of heterogeneity in preferences over the provision of these public goods.”

“This raises the question why, instead of a unified nation-state, we do not observe a series of overlapping jurisdictions that best resolve this trade-off for individual public-goods. The authors argue convincingly that such a configuration would face prohibitive transaction costs and fail to internalize economies of scope. The nation-state monopolizes the provision of essential public goods (law and defense) and adopts a host of other functions because of economies of scope and transaction costs. Some functions are delegated to subnational levels of government, but subnational jurisdictions do not cross national borders.” [from Redding’s review, p.161]

  1. One particular early idea was whether one could ‘generalize’ some parallel computing ‘results’ such as Amdahl’s law to organizations. The diminishing returns of Amdahl’s law occur because only some portion of the program is parallelizable and hence adding more processors provides ever less speedup as one approaches the basic speed constraint set by the remaining serial part of the program. This problem will be made even worse if there is a need for intercommunication between processors (e.g. due to some part of the program having sequential dependencies, as would be the case where there is a set of parallelizable tasks that need to be performed sequentially). Thus, while more processors allow for greater application of the divide-and-conquer effect (and therefore specialization and economies of scale) they may require greater transaction costs in terms of communication/synchronization between the processors. At some point the transaction costs become larger than the divide-and-conquer benefits and the system becomes slower. (This also has connections to the transaction-costs theory of the firm, though there the comparison is not between economies of scale and transaction costs but between transaction costs within the firm versus those outside of the firm — in the market.)

  2. A second point was on the empirical evidence on optimal size of such entities. One often hears comments about 5-12 being the optimal size for a team or 300-500 being the optimal size for a community (see e.g. Alexander et al’s A Pattern Language) but I’ve never yet come across (though I haven’t looked that hard) firm evidence on which these figures are based. 
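
The Amdahl's-law analogy in note 1 is easy to make concrete. Below is a minimal sketch; the serial fraction and the linear communication-cost term are illustrative assumptions. Plain Amdahl speedup is capped by the serial part, and once a per-processor overhead is added the speedup peaks at a finite number of processors and then declines.

```python
def amdahl_speedup(n, serial_fraction):
    """Classical Amdahl's law: speedup on n processors when a fixed
    fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def speedup_with_overhead(n, serial_fraction, comm_cost):
    """Toy extension: add a communication/synchronization cost that
    grows linearly with the number of processors (the 'transaction
    costs' of the analogy). comm_cost is an illustrative parameter."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n + comm_cost * n)

# Plain Amdahl: speedup rises monotonically but is capped at 1/s.
assert amdahl_speedup(1000, 0.1) < 1 / 0.1

# With overhead there is an interior optimum 'size':
best_n = max(range(1, 200), key=lambda n: speedup_with_overhead(n, 0.05, 0.001))
assert 1 < best_n < 199  # adding processors eventually makes things slower
```

With serial fraction 0.05 and overhead 0.001 per processor the optimum lands near sqrt(0.95/0.001) ≈ 31 processors; past that, coordination costs outweigh the divide-and-conquer gains, which is exactly the shape of the organizational argument above.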

Machiavelli on the Values of Foresight and Prompt Action in Statecraft as in Medicine

[After describing how the Romans behaved towards other powers both large and small] Because the Romans did in these instances what all prudent princes ought to do — taking care to concern themselves not only with present troubles, but also future ones. For these they prepared with every effort, because, when distant, it is easy to forestall them; but when left until near they are impossible to prevent. It happens then as it does to physicians in the treatment of consumption, which in the commencement is easy to cure and difficult to understand; but when it has neither been discovered in time nor treated upon a proper principle, becomes easy to understand and difficult to cure. The same thing happens in state affairs; by foreseeing them at a distance, which is only done by men of wisdom, the evils which might arise from them are soon cured; but when, from want of foresight, they are suffered to increase to such a degree that they are perceptible to everyone, there is no longer a remedy.

— Machiavelli, The Prince (1513) Gutenberg version

Parting the Waters: America in the King Years 1954-1963

I have just finished reading Taylor Branch’s Parting the Waters: America in the King Years 1954-63, the first volume of a trilogy. It is a fascinating book and, fittingly for such a sprawling multi-faceted panorama, the two most important themes were tangential ones. The first is the degree to which the perception of events, at least as presented in the press or comprehended by those at a distance, is removed from the actual reality. The second is the extent of the abuses committed by the FBI under Hoover (and the collusion of politicians in this corruption of democracy due to the power that Hoover wielded). The genesis of these abuses, and the way in which they continued unchecked for so long, is a salutary warning of the need to be ever wary when claims of national security are used to prevent the monitoring of the activities of the government by the public at large.

From the standpoint of personal injury to King, Robert Kennedy did perhaps his greatest disservice by remaining a caretaker Attorney General for another ten months, when the FBI ran unchecked.

The Bureau wasted no time describing its target as “King’s unholy alliance with the Communist Party, USA,” and King as “an unprincipled opportunistic individual.” Sullivan summoned Agent Nichols and others to Washington for a nine-hour war council, the result of which was a six-point plan to “expose King as an immoral opportunist who is not a sincere person but is exploiting the racial situation for personal gain.” All the top officials signed a ringing declaration of resolve laced with the usual pledges to proceed “without embarrassment to the Bureau.” The underlying hostility did not make the officials that unusual among Americans of their station. Nor was it unusual that an odd man such as Hoover would run aground in his obsession with normalcy. Race, like power, blinds before it corrupts, and Hoover saw not a shred of merit in either King or Levison. Most unforgivable was that a nation founded on Madisonian principles allowed secret police powers to accrue over forty years, until real and imagined heresies alike could be punished by methods less open to correction than the Salem witch trials. The hidden spectacle was the more grotesque because King and Levison both in fact were the rarest heroes of freedom, but the undercover state persecution would have violated democratic principles even if they had been common thieves. [p. 919, emphasis added]

What Financial Trading Systems Tell Us About Markets

From Hans Stoll, Electronic Trading in Stock Markets, JEP, Winter 2006, 20:1.


1. Moving towards fully electronic markets. The NYSE has just merged with Archipelago and Nasdaq with Instinet.

2. Economies of scale and their effect on firm size and organization:

“Technology has changed the nature of the specialist in other ways, too. In 1975, 381 individual specialists owned seats and operated 67 specialist firms organized primarily in partnerships. Today, the number of individuals acting as specialists remains about the same, but they are organized into only seven specialist firms, structured as corporations. The consolidation of dealer firms on the NYSE (and also in other markets) reflects the economies of scale in the dealer business.” [p. 157]

Rmk: Might it also reflect the consequences of competition, which has spurred the creation of a more oligopolistic market in order to preserve rents? As Stoll notes:

The bid-ask spread is a measure of the cost of immediacy, a term coined by Demsetz (1968). Dealers supply immediacy and earn the spread, while demanders of immediacy pay the spread. The spread compensates dealers for the risks they assume in buying (or selling) a stock that someone else is anxious to sell (or buy), as well as for the costs of processing the trade. The spread may also reflect monopoly rents if dealers have market power. [p. 156, emphasis added]

Illegal Activity by Specialists

Specialists are frequently criticized for the conflict inherent in their dual roles as brokers for the orders left with them to be entered in the book and as dealers trading for their own accounts. …

The Securities and Exchange Commission’s most recent investigation of specialists found that certain individual specialists traded ahead of customer orders in violation of the negative obligation not to trade “unless reasonably necessary”. The firms paid a total of $242 million to settle the charges against them (SEC, 2004). On April 12 2005, 15 individual specialists who worked for the firms were indicted, and the NYSE paid a fine of $20 million for failure to regulate the specialists properly. … [ed: goes on to list the details of these violations such as trading ahead of customer orders and interpositioning.]

The source of these problems is that the execution of trades is not automatic. The specialist has discretion … [pp. 158-159]

But …

Evidence of Collusive Behaviour in Nasdaq

Stoll states, pp.159-160:

In view of the number of competing market makers [2 or more compared to NYSE 1] and the apparent high-tech and transparent nature of the Nasdaq market, it might seem that transactions costs would be reduced with better prices for investors. However, all was not well. Several academic studies and investigations by the SEC and the Justice Department revealed dealer behavior that artificially raised bid-ask spreads above competitive levels (see in particular Christie and Schultz, 1994, 1995; Christie, Harris and Schultz, 1994; Huang and Stoll, 1996). The source of the problem was that in a pure dealer market, which Nasdaq resembled at that time, customers must trade at dealer quotes. … Consequently the bid-ask spread was determined by dealers without the possibility of competition from customers.

For several reasons, dealers did not compete effectively among themselves to narrow spreads. First, as Christie and Schultz (1994) emphasize, dealers seemed to coordinate their quotation patterns by only using price fractions that ended in even-eighths … which meant spreads were at least $0.25. Second, the practice of “preferencing” assured dealers they would receive order flow whether or not they quoted the best price. … Since much of the order flow in a stock was preferenced, a given dealer could not attract much additional order flow by improving quotes. Instead, [an improved quote would reduce the revenue earned on] business for all dealers (since each dealer committed to trade at the best quote). Consequently, the benefit to a dealer of improving the quote was low, and as a result, dealers limited their competition on quotes. Third, while quote competition was limited on the public Nasdaq market, dealers did offer competitive quotes over interdealer trading systems in order to manage their inventory. These quotes were not available to the general public, however. The net effect of these factors was to raise the bid-ask spread above the competitive level, especially for smaller trades. Huang and Stoll (1996), for example, show that spreads in Nasdaq stocks in the early 1990s were twice those of comparable NYSE stocks. [pp. 159-160, emphasis added]
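
The even-eighths point is simple arithmetic, and worth spelling out: if all quotes sit on even eighths of a dollar, the smallest possible gap between two distinct quotes, and hence the minimum bid-ask spread, is 2/8 = $0.25. A small sketch using exact fractions:

```python
from fractions import Fraction

# Quotes restricted to even eighths of a dollar: 0/8, 2/8, 4/8, 6/8, 8/8
even_eighths = [Fraction(k, 8) for k in range(0, 9, 2)]

# All possible positive gaps between a bid and a higher ask:
gaps = {ask - bid for bid in even_eighths for ask in even_eighths if ask > bid}

# The tightest quotable spread is a quarter dollar:
assert min(gaps) == Fraction(1, 4)
```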


Given that the financial trading systems discussed above are, in many respects, the apotheosis of the market based exchange systems it is interesting to consider:

  1. The distance the markets are from textbook perfect competition (seven dealing firms on the NYSE, one market-maker per stock, etc.). It is also yet another example of the empirical bankruptcy of the contestable-markets hypothesis.
  2. The potential for monopolistic or oligopolistic abuse was intensified by the ability to set institutional rules (especially on Nasdaq), along with market features conducive to such behaviour in the form of frequent and repeated interaction among participants (all of which facilitates abuse).
  3. Significant abuse did occur, with substantial costs to almost all participants.
  4. The crucial role played by external official (SEC, Justice Department) and unofficial (academics) regulators in correcting and sanctioning these abuses and promoting the overall efficiency of the market (see note 1 below).

To put it most starkly, I would summarize the implications for policy as: efficient markets cannot exist without regulation.

  1. “In 1993 the quoted half-spread on Microsoft exceeded 16 cents. By the end of 2001, the quoted half-spread had declined to under 2 cents. On Nasdaq, the most dramatic decline occurred as result of a May 1994 meeting of the Nasdaq administration with dealers to urge a reduction in spreads. The meeting was called in response to the Christie and Schultz (1994) paper and the emerging controversy over Nasdaq dealer behavior.” 

Politically, IP is where the Environmental movement was 30 years ago

This speech was delivered in my capacity as Director of FFII-UK in the “IP and the Knowledge Commons – Political Parties” panel of the TACD conference on The Politics and Ideology of Intellectual Property, which took place in Brussels on 20-21 March 2006.


I always prefer discussion and questions so I’m going to keep my formal presentation very short. In keeping it short I’m also going to restrict myself to telling you one, well, maybe two things.

The first is that, at present, when it comes to intellectual property there are no political parties. That is, there are no, or very few, discernible ideological differences between political groupings on intellectual property (and on innovation policy in general). If you look at other areas (labour law, monetary policy, etc.) you will see clear differences between political parties. In advance you can predict with a fair degree of confidence which way a party or grouping will go. But when it comes to intellectual property that really isn’t the case.

What this means is that when it comes to voting either all parties tend to go the same way or all parties split. The most extreme case I know of this second situation was the first reading of the European Parliament on software patents when practically all of the major groups split, in some cases not only at the supra-national but at the national level.

Why is this? I think the answer is fairly simple. For intellectual property there are, very, very roughly, two sets of interests. On one side you have rightsholders. On the other side you have the general public. Rightsholders see direct gains from extensions of IP, be they in term or scope. The general public, by contrast, while they may see benefits from extensions in the form of increased innovation, also bear the costs.

At the same time rightsholders are generally very concentrated while the general public is diffuse and poorly organized — so much so that many significant changes pass virtually unnoticed.

This means that:

  1. much of the time the debate is very one-sided. Because it is one-sided there is no need for parties to take ‘sides’
  2. the interests that do exist are very broad-based and tend to impinge equally across the political spectrum

This brings me to my second point which comes in the form of an analogy. It is the analogy between debates over innovation and intellectual property and those over environmental issues. I believe that where we stand today politically with respect to innovation policy and IP is where we stood with respect to environmental issues 30 or 40 years ago.

Just like innovation policy, with environmental issues you see concentrated interests pitted against those of the general public. There also you have growing political engagement as a result of significant external changes. And just like with the environment 40 years ago we are just beginning to build a movement to properly represent the public on the relevant issues.

Today when you look at a political party it will have a position on the environment, and we even have ‘Green’ parties — albeit generally small ones — specifically focused on those issues. We’ve even got to the stage where, in the UK, when the Conservatives (a right-wing party) took out full-page ads to announce their new policy agenda, one of the five bullet points was about the environment.

Similarly, I think 30 years from now innovation policy will have the same prominence. All political parties will have positions on these kinds of issues, and not buried away somewhere in their manifestos but as the kind of thing they mention when they take out those full-page ads.

There will also be a much, much fuller civil society engagement in these issues. Just think of Greenpeace, Friends of the Earth, Conservation International now and how they were 40 years ago (if they even existed). In 40 years I believe we’ll see organizations on the same kind of scale and with the same level of membership in the area of innovation policy.

This won’t mean no IP, quite the contrary, but I do believe it will mean a far better balance in the way we use and regulate IP, and that, ladies and gentlemen, can only be a good thing.

Thank you.

Coase on Being Misunderstood

The world of zero transaction costs has often been described as a Coasian world. Nothing could be further from the truth. It is the world of modern economic theory, one which I was hoping to persuade economists to leave.

Ronald Coase, The Firm, The Market and The Law (Univ. of Chicago Press, Chicago, 1988), p. 174.

This is a point I often make to people, especially those doing Law and Economics who seem fixated on his 1960 JLE paper and the ‘Coase Theorem’ (the assignment of property rights does not matter as bargaining will ensure the efficient outcome). The irony for anyone who reads the actual article is that that particular point is made briefly at the start and is there only to lead in to the main question: how should we set things up when bargaining is not possible?

Parkinson’s Laws and Painting the Bikeshed

Law Number 1

Work expands so as to fill the time available for its completion. General recognition of this fact is shown in the proverbial phrase ‘It is the busiest man who has time to spare.’ Thus, an elderly lady of leisure can spend the entire day in writing and dispatching a postcard to her niece at Bognor Regis. An hour will be spent finding the postcard, another in hunting for spectacles, half an hour in a search for the address, an hour and a quarter in composition, and twenty minutes in deciding whether or not to take an umbrella when going to the pillar box in the next street. The total effort that would occupy a busy man for three minutes all told may in this fashion leave another person prostrate after a day of doubt, anxiety, and toil.

Granted that work (and especially paperwork) is thus elastic in its demands on time, it is manifest that there need be little or no relationship between the work to be done and the size of the staff to which it may be assigned. A lack of real activity does not, of necessity, result in leisure. A lack of occupation is not necessarily revealed by a manifest idleness. The thing to be done swells in importance and complexity in a direct ratio with the time to be spent. This fact is widely recognized, but less attention has been paid to its wider implications, more especially in the field of public administration. Politicians and taxpayers have assumed (with occasional phases of doubt) that a rising total in the number of civil servants must reflect a growing volume of work to be done. Cynics, in questioning this belief, have imagined that the multiplication of officials must have left some of them idle or all of them able to work for shorter hours. But this is a matter in which faith and doubt seem equally misplaced. The fact is that the number of the officials and the quantity of the work are not related to each other at all. The rise in the total of those employed is governed by Parkinson’s Law and would be much the same whether the volume of the work were to increase, diminish, or even disappear. The importance of Parkinson’s Law lies in the fact that it is a law of growth based upon an analysis of the factors by which that growth is controlled.

The validity of this recently discovered law must rest mainly on statistical proofs, which will follow. Of more interest to the general reader is the explanation of the factors underlying the general tendency to which this law gives definition. Omitting technicalities (which are numerous) we may distinguish at the outset two motive forces. They can be represented for the present purpose by two almost axiomatic statements, thus: (1) ‘An official wants to multiply subordinates, not rivals’ and (2) ‘Officials make work for each other.’

To comprehend Factor One, we must picture a civil servant, called A, who finds himself overworked. Whether this overwork is real or imaginary is immaterial, but we should observe, in passing, that A’s sensation (or illusion) might easily result from his own decreasing energy: a normal symptom of middle age. For this real or imagined overwork there are, broadly speaking, three possible remedies. He may resign; he may ask to halve the work with a colleague called B; he may demand the assistance of two subordinates, to be called C and D. There is probably no instance, however, in history of A choosing any but the third alternative. By resignation he would lose his pension rights. By having B appointed, on his own level in the hierarchy, he would merely bring in a rival for promotion to W’s vacancy when W (at long last) retires. So A would rather have C and D, junior men, below him. They will add to his consequence and, by dividing the work into two categories, as between C and D, he will have the merit of being the only man who comprehends them both. It is essential to realize at this point that C and D are, as it were, inseparable. To appoint C alone would have been impossible. Why? Because C, if by himself, would divide the work with A and so assume almost the equal status that has been refused in the first instance to B; a status the more emphasized if C is A’s only possible successor. Subordinates must thus number two or more, each being thus kept in order by fear of the other’s promotion. When C complains in turn of being overworked (as he certainly will) A will, with the concurrence of C, advise the appointment of two assistants to help C. But he can then avert internal friction only by advising the appointment of two more assistants to help D, whose position is much the same. With this recruitment of E, F, G and H the promotion of A is now practically certain.

Seven officials are now doing what one did before. This is where Factor Two comes into operation. For these seven make so much work for each other that all are fully occupied and A is actually working harder than ever. An incoming document may well come before each of them in turn. Official E decides that it falls within the province of F, who places a draft reply before C, who amends it drastically before consulting D, who asks G to deal with it. But G goes on leave at this point, handing the file over to H, who drafts a minute that is signed by D and returned to C, who revises his draft accordingly and lays the new version before A.

What does A do? He would have every excuse for signing the thing unread, for he has many other matters on his mind. Knowing now that he is to succeed W next year, he has to decide whether C or D should succeed to his own office. He had to agree to G’s going on leave even if not yet strictly entitled to it. He is worried whether H should not have gone instead, for reasons of health. He has looked pale recently – partly but not solely because of his domestic troubles. Then there is the business of F’s special increment of salary for the period of the conference and E’s application for transfer to the Ministry of Pensions. A has heard that D is in love with a married typist and that G and F are no longer on speaking terms – no-one seems to know why. So A might be tempted to sign C’s draft and have done with it. But A is a conscientious man. Beset as he is with problems created by his colleagues for themselves and for him – created by the mere fact of these officials’ existence – he is not the man to shirk his duty. He reads through the draft with care, deletes the fussy paragraphs added by C and H, and restores the thing to the form preferred in the first instance by the able (if quarrelsome) F. He corrects the English – none of these young men can write grammatically – and finally produces the same reply he would have written if officials C to H had never been born. Far more people have taken far longer to produce the same result. No-one has been idle. All have done their best. And it is late in the evening before A finally quits his office and begins the return journey to Ealing. The last of the office lights are being turned off in the gathering dusk that marks the end of another day’s administrative toil. Among the last to leave, A reflects with bowed shoulders and a wry smile that late hours, like grey hairs, are among the penalties of success.

C. Northcote Parkinson, Parkinson’s Law: The Pursuit of Progress, London, John Murray (1958)

Law Number 2

Law of Triviality: the time spent on any item of the agenda will be in inverse proportion to the sum involved

People who understand high finance are of two kinds: those who have vast fortunes of their own and those who have nothing at all. To the actual millionaire a million dollars is something real and comprehensible. To the applied mathematician and the lecturer in economics (assuming both to be practically starving) a million dollars is at least as real as a thousand, they having never possessed either sum. But the world is full of people who fall between these two categories, knowing nothing of millions but well accustomed to think in thousands, and it is of these that finance committees are mostly composed. The result is a phenomenon that has often been observed but never yet investigated. It might be termed the Law of Triviality. Briefly stated, it means that the time spent on any item of the agenda will be in inverse proportion to the sum involved.
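Stated as a formula, the law is simply t = k / s: debate time inversely proportional to the sum at stake. The agenda items, sums, and constant below are invented purely to show the shape of the relationship:

```python
# Illustrative only: hypothetical agenda items and an arbitrary
# constant k, chosen to make the inverse proportion visible.

def debate_minutes(sum_involved, k=10_000_000):
    """Minutes of committee debate under the Law of Triviality: t = k / s."""
    return k / sum_involved

agenda = {
    "new reactor":          10_000_000,  # waved through in a minute
    "staff bicycle shed":        2_000,  # argued at length
    "committee refreshments":       50,  # debated almost indefinitely
}
for item, cost in agenda.items():
    print(f"{item:>22}: ${cost:>10,} -> {debate_minutes(cost):,.1f} min")
```

Everyone on the committee can hold an opinion about a bicycle shed; almost no one can hold one about a reactor, so the smallest sums absorb the most time.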

FFII Statement at WIPO IIM 13th April 2005

I authored the following in my capacity as Director of FFII-UK as the FFII statement at WIPO IIM on the Development Agenda.

Submission of the Foundation for a Free Information Infrastructure, WIPO IIM 11th to 13th April 2005

First, at the outset Mr Chairman we would like to congratulate you, as well as the distinguished Vice-Chair, on your election. We would also like to thank the WIPO secretariat and its member states for this opportunity to present our views to you today.

Mr Chairman, distinguished delegates, and others, the Foundation for a Free Information Infrastructure (FFII) is a non-profit association registered in several European countries, which is dedicated to the spread of data processing literacy. FFII supports the development of public information goods based on copyright, free competition, and open standards. More than 500 members, 1,200 companies and 75,000 supporters have entrusted the FFII to act as their voice in public policy questions concerning exclusion rights (intellectual property) in data processing.

We wish to be brief in our submission and will only emphasize a single point, and one already clearly raised in the submission by the Friends of Development to this meeting in which they stated:

“[para 37] … Norm-setting at the international level has been dominated by a paradigm that regards intellectual property rights as the only and unequivocally beneficial instrument to promote creative intellectual activity. Increased scope and levels for intellectual property protection thus often become ends in themselves in international negotiations, which have failed to take into account the need to promote and enhance access to knowledge and the results of innovation ….”

These are views we strongly endorse. To us the approach of WIPO often brings to mind the maxim that for those who possess a hammer everything is a nail. While IP in the right circumstances can be beneficial, conversely in the wrong ones it is undoubtedly harmful.

For our constituents this is not just an abstract possibility but a concrete one. A primary purpose of our organization over the last several years has been to protect the European software industry from the threat of software patents. For we believe that patents on software hinder rather than help innovation, as well as fundamentally undermining the creation of the free and open standards necessary to sustain our information infrastructures into the 21st century. Our view is not simply opinion but is backed by a large body of evidence. To give one example among many, Deutsche Bank wrote in a report of June last year that: ‘Stronger IP protection is not always better. Chances are that patents on software, common practice in the US and on the brink of being legalised in Europe, in fact stifle innovation.’

Yet without any basis in either theory or fact a variety of WIPO documents have uncritically endorsed more and stronger IP as beneficial for the software industry. For example, WIPO’s publication ‘Intellectual Property: A Power Tool for Economic Growth’ uncompromisingly states in its preface: ‘This publication is written from a definite perspective — that IP is good.’ In our view this is simply not the case: IP is neither good nor bad but only a tool — in some cases the benefits of IP outweigh the costs, and in others they will not; how could it be otherwise? Such pronouncements only serve to encourage the view that, for WIPO, increased IP rights become ends in themselves, even when such rights harm the public interest, reducing access to knowledge, limiting innovation, obstructing competition and imposing large costs that fall most heavily on countries least able to bear them.

We believe that a refocusing of WIPO’s mission towards greater balance in the use of IP as well as the use of alternative methods of fostering creativity and innovation can only enhance the prestige of this body. Moreover it will also, more importantly, vastly increase the benefits and reduce the costs for its members of the agreements reached here. Thank you for your attention.

Second Life as Metaverse

Second Life is a massively-multiplayer world developed by Linden Lab. Unlike many other MMGs there is no particular aim; rather, the intent is to live in the world and add to it. Thus, importantly, it is the game’s participants that create and develop the universe they inhabit (its creators explicitly invoke the Metaverse of Neal Stephenson’s Snow Crash as a model).

MMGs (massively multiplayer games) solve the central problem that current computer technology faces in creating interesting games: namely, the lack of decent AI. Without AI, all the interesting parts of a ‘world’ have to be lovingly crafted by hand. Thus, while we can draw lots of pretty stuff, we are (a) severely limited in the size and variety of the world’s artifacts and geography and (b) /very/ limited in the other entities that we can interact with.

Standard MMORPGs such as EverQuest address problem (b) in a limited way by using the game’s participants to populate their world. However, one is still restricted by the fact that such participants must remain within the contours of the plot and the surrounding reality, as well as by the need to provide backup computer-generated entities (be it for the dull occupations in this online world or until the strong law of large numbers kicks in). Moreover, this type of game fails to leverage the game’s own /participants/ to help create and extend the world (much). A game such as Second Life (several others have gone down this route) takes the logical next step and allows both (a) and (b) to be addressed. The final step would be to integrate some kind of incentive mechanism, though it should be noted that Second Life appears to demonstrate that this is not strictly necessary to get the participants to contribute.