Category Archives: History

The Elusive Disappearance of Community

From Laslett, ‘Philippe Ariès and “La Famille”’, p.83 (quoted in Eisenstein, p.131):

The actual reality, the tangible quality of community life in earlier towns or villages … is puzzling … and only too susceptible to sentimentalisation. People seem to want to believe that there was a time when everyone belonged to an active, supportive local society, providing a palpable framework for everyday life. But we find that the phenomenon itself and its passing — if that is what, in fact, happened — perpetually elude our grasp.

Estimating Information Production and the Size of the Public Domain

Here we’re going to look at using library catalogue data as a source for estimating information production (over time) and the size of the public domain.

Library Catalogues

Cultural institutions, primarily libraries, have long compiled records of the material they hold in the form of catalogues. Furthermore, most countries have had one or more libraries (usually the national library) whose task included an archival component and, hence, whose collections should be relatively comprehensive, at least as regards published material.

The catalogues of those libraries then provide an invaluable resource for charting, in the form of publications, levels of information production over time (subject, of course, to the obvious caveats about coverage and the relationship of general “information production” to publications).

Furthermore, library catalogue entries record (almost) the right sort of information for computing public domain status; in particular, a given record usually has (a) a publication date and (b) unambiguously identified author(s) with birth date(s) (though unfortunately not death dates). Thus, we can also use this catalogue data to estimate the size of the public domain — size being equated here to the total number of items currently in the public domain.
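Since the catalogue gives author birth dates but not death dates, public domain status has to be estimated rather than computed exactly. A minimal sketch of one way to do this, assuming a typical author lifespan and a “life plus 70 years” term (the field names, the lifespan figure and the fallback rule here are illustrative assumptions, not drawn from the catalogue data itself):

```python
# Sketch: estimate public-domain status for a catalogue record.
# ASSUMPTIONS (not from the source): a fixed typical lifespan, a
# "life + 70" copyright term, and illustrative record field names.

ASSUMED_LIFESPAN = 75   # rough guess at author lifespan, in years
COPYRIGHT_TERM = 70     # years after death (EU/UK "life + 70")

def estimated_public_domain(record, current_year=2009):
    """Guess whether a catalogue item is in the public domain.

    `record` is a dict with 'pub_date' and 'author_birth' (int years);
    'author_birth' may be None when the author is unidentified.
    """
    birth = record.get("author_birth")
    if birth is None:
        # Fall back on publication date alone for anonymous works.
        return current_year - record["pub_date"] > COPYRIGHT_TERM
    estimated_death = birth + ASSUMED_LIFESPAN
    return current_year - estimated_death > COPYRIGHT_TERM

record = {"pub_date": 1850, "author_birth": 1810}
print(estimated_public_domain(record))  # author assumed dead ~1885
```

In practice one would also want to handle multiple authors (taking the latest estimated death date) and the many records with missing or approximate dates.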


To illustrate, here are some results based on the catalogue of Cambridge University Library, which is one of the UK’s “copyright libraries” (i.e. they have a right to obtain, though not an obligation to hold, one copy of every book published in the UK). This first plot shows the number of publications per year, as determined by the publication date recorded in the catalogue, up until 1960 (when the dataset ends).

A major concern when basing an analysis on these kinds of trends is that fluctuations over time derive not from changes in underlying production and publication rates but from changes in the acquisition policies of the library concerned. To check for this, we present a second plot which shows the same information but derived from the British Library’s catalogue. Reassuringly, though there are differences, the basic patterns look remarkably similar.
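Once the records are parsed and cleaned, the per-year counts behind plots like these reduce to a simple tally by publication date. A sketch in Python (the record structure is illustrative; real catalogue records, e.g. MARC, require far more cleaning than shown here):

```python
from collections import Counter

# Sketch: tally catalogue items by publication year. The records
# below are illustrative stand-ins for parsed catalogue entries.
records = [
    {"title": "A", "pub_date": 1800},
    {"title": "B", "pub_date": 1800},
    {"title": "C", "pub_date": 1801},
]

# Skip records with a missing or zero publication date.
counts = Counter(r["pub_date"] for r in records if r.get("pub_date"))
for year in sorted(counts):
    print(year, counts[year])  # 1800 2 / 1801 1
```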

CUL data 1600-1960

Number of items (books etc) Per Year in the Cambridge University Library Catalogue (1600-1960).

BL data 1600-1960

Number of items (books etc) Per Year in the British Library Catalogue (1600-1960).

What do we learn from these graphs?

  • In total there were over a million “Items” in this dataset (and parsing, cleaning, loading and analyzing this data took on the order of days — while the preparation work to develop and perfect these algorithms took weeks if not months)
  • The main trend is a fairly consistent, and approximately exponential, increase in the number of publications (items) per year. At the start of our time period in 1600 we have around 400 items a year in the catalogue while by 1960 the number is over 16000.
  • This is a forty-fold increase and corresponds to an annual growth rate of approximately 1.0%. Assuming “growth” began only around the time of the industrial revolution (~1750), when output was around 1000 (10-year moving average), gives a somewhat higher growth rate of around 1.3%.
  • There are some fairly noticeable fluctuations around this basic trend:
    1. There appears to be a burst in publications in the decade or decade and a half before 1800. One can conjecture several, more or less intriguing, reasons for this: the cultural impact of the French Revolution (esp. on radicalism), the effect of loosening copyright laws after Donaldson v. Beckett, etc. However, without substantial additional work, for example examining the content of the publications in that period, these must remain little more than conjectures.
    2. The two world wars appear dramatically in our dataset as sharp dips: the pre-1914 level of around 7k+ falls by over a third during the war to around 4.5k and then rises rapidly again to reach, and pass, 7k per year in the early 1920s. Similarly, the late 1930s level of around 9.5k per year drops sharply upon the outbreak of war, reaching a low of 5350 in 1942 (a drop of 45%), and then rebounds rapidly at the war’s end: from 5.9k in 1945 to 8k in 1946, 9k in 1947 and 11k in 1948!
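As a sanity check, the compound growth rates implied by the endpoint figures (around 400 items/year in 1600, over 16000 in 1960, and roughly 1000 around 1750) can be computed directly:

```python
# Check of the growth-rate arithmetic: ~400 items/year in 1600 to
# ~16000 in 1960 is a forty-fold increase over 360 years.
factor = 16000 / 400
years = 1960 - 1600
rate = factor ** (1 / years) - 1
print(f"{rate:.4f}")  # ~0.0103, i.e. roughly 1% per year

# Same calculation starting from the industrial revolution (~1750,
# ~1000 items/year on a 10-year moving average):
rate2 = (16000 / 1000) ** (1 / (1960 - 1750)) - 1
print(f"{rate2:.4f}")  # ~0.0133
```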

To do next (but in separate entries — this post is already rather long!):

  • Estimates for the size of the public domain: how many of those catalogue items are in the public domain
  • Distinguishing Publications (“Items”) from “Works” — i.e. production of new material versus the reissuance of old (see previous post for more on this).

Colophon: Background to this Research

I’m working on an EU-funded project on the Public Domain in Europe, with particular focus on the size and value of the public domain. This involves getting large datasets about cultural material and trying to answer questions like: How many of these items are in the public domain? What’s the difference in price and availability of public domain versus non-public-domain items?

I’ve also been involved for several years in Public Domain Works, a project to create a database of works which are in the public domain.

Colophon: Data and Code

All the code used in parsing, loading and analysis is open and available from the Public Domain Works mercurial repository. Unfortunately, the library catalogue data is not: library catalogue data, at least in the UK, appears to be largely proprietary and the raw data kindly made available to us for the purposes of this research by the British Library and Cambridge University Library was provided only on a strictly confidential basis.

Talk by Frederick Scherer: Deregulatory Roots of the Current Financial Crisis

Last Thursday I attended a talk by Frederick Scherer at the [Judge] entitled: “Deregulatory Roots of the Current Financial Crisis”. Below are some sketchy notes.


Macro story:

  • Huge current account deficit for last 10-15 years
    • Expansionary Fed policy has permitted this to happen while interest rates are low
  • Median real income has not risen since the mid-1970s
    • Cheap money means personal savings have dropped consistently: 1970s ~ 7%, 2000s ~ 1%
  • Basically overconsumption

Micro story:

  • Back in the old days, banking was very dull — three threes story, “One reason I never worked in the financial industry: it was very dull when I got my MBA in 1958”
  • S&L story of 1980s: inflation squeeze + Reagan deregulation
    • FMs: Fannie Mae, Freddie Mac get more prominent
    • [Ed]: main focus here was on pressure for S&L to find better returns without much mention of the thoughtlessness of Reagan deregulatory approach (deposits still insured but S&L can now invest in anything) and the fraud and waste it engendered — see “Big Money Crime: Fraud and Politics in the Savings and Loan Crisis” by Kitty Calavita, Henry N. Pontell, and Robert Tillman
  • In the 1920s there were $2 billion of securitized mortgages (securitization before the 1980s!)
  • Market vs. bank finance for mortgages: market more than bank by mid-1980s [ed: I think — graph hard to read]
  • To start with: FMs pretty tough when giving mortgages, but with new securitizers and lots of cheap money, standards dropped => moral hazard for issuers [ed: not quite sure why this is moral hazard — securitizers aren’t the ones who should care, it’s the buyers who should care]
  • Even if issuers don’t care, buyers of securitized mortgages should care and they depended on ratings agencies (Moodys, S&P etc)
  • Unfortunately, ratings agencies had serious conflicts of interest as they were paid to do ratings by firms issuing the securities! Result: ratings weren’t done well
  • Worse: people ignored systemic risk in the housing market and therefore made far too low assessment of risk of these securities [ed: ignoring systemic risks implies underestimating correlations — especially for negative changes — between different mortgage types (geographic, owner-type etc). Interesting here to go back and read the quarterly statement from FM in summer 2008 which claims exactly this underestimate.]
  • Banks over-leveraged for the classic reason (it raises your profits if things are good — but you can get wiped out if things are bad)
    • This made banks very profitable: by mid 2000s financial corporations accounted for 30% of all US corporate profits
    • Huge (and unjustified relative to other sectors) wage levels. Fascinating evidence here provided by correlating wage premia to deregulation: fig 6 from Philippon and Reshef shows a dramatic association of the wage premium (corrected for observable skills) with (de)regulation. The wage premium goes from ~1.6 in the 1920s to <1.1 in the 1960s and 70s and then back up to 1.6/1.7 in the mid 2000s
  • Credit default swaps and default insurance: not entirely new but doubled every year from 2001 to the present ($919 billion in 2001 to $62.2 trillion in 2007)
    • Much of the time CDS issued without any holding of the underlying asset
    • There was discussion of regulating CDSes in the 1990s (a blue-ribbon panel reported in 1998) but due to shenanigans in the House and Senate led by Phil Gramm (husband of Wendy Gramm, who was head of the Commodity Futures … Board), CDSes were entirely deregulated via an act tacked onto a Health-Education-Appropriations bill in 2001.
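The CDS growth figures in the notes can be cross-checked: $919 billion in 2001 to $62.2 trillion in 2007 is indeed consistent with roughly one doubling per year over those six years.

```python
import math

# Cross-check of the CDS growth figures quoted in the talk:
# $919 billion in 2001 to $62.2 trillion in 2007.
growth = 62.2e12 / 919e9               # total growth factor, six years
implied_doublings = math.log2(growth)  # doublings needed for that factor
print(round(growth, 1), round(implied_doublings, 2))
```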

It goes bad:

  • Housing bubble breaks in 2007 or even 2006
    • Notices of default start trending upwards in mid 2006
  • [ran out of time]

What is to be done:

  • Need simple, clear rules
    • A regulator cannot monitor everything day-to-day
  • Outlaw Credit Default Swaps
  • Anyone who issues CDOs must “keep skin in the game”
  • Leverage ratios. Perhaps? Hard to regulate.
  • Deal with too big to fail by making it hard for “giants to form” and breaking up existing over-large conglomerates
  • We need to remember history!

Own Comments

This was an excellent presentation though, as was intended, it was more a summary of existing material than a presentation of anything “new”.

Not sure I was convinced by the “remember history” logic. It is always easy to be wise after the event and say “Oh look how similar this all was to 1929”. However, not only is this unconvincing analytically (it is really hard to fit trends in advance with any precision, since every business cycle is different), but before the event there are always plenty of people (and lobbyists) arguing that everything is fine and we shouldn’t interfere. Summary: awareness of history is all very well, but it does not provide anything like the precision needed to support pre-emptive action. As such it is not really clear what “awareness of history” buys us.

More convincing to me (and one could argue this still has some “awareness of history” in it) are actions like the following:

  1. Worry about incentives in general and the principal-agent problem in particular. Try to ensure long-termism and prevent overly short-term and high-powered contracts (which essentially end up looking like a call option).

    Since incentives can be hard to regulate directly one may need to work via legislation that affects the general structure of the industry (e.g. Glass-Steagall).

    Summary: banking should be a reasonably dull profession with skill-adjusted wage rates similar to other sectors of the economy. If things get too exciting it is an indicator that incentives are out of line and things are likely to go wrong (quite apart from the inefficiency of having all those smart people pricing derivatives rather than doing something else!)

  2. Be cautious regarding financial innovation especially where new products are complex. New products have little “track record” on which to base assessments of their benefits and risks and complexity makes this worse.

    In particular, complexity worsens the principal-agent problem for “regulators” both within and outside firms (how can I decide what bonus you deserve if I don’t understand the riskiness and payoff structure of the products you’ve sold?). The valuation of many financial products such as derivatives depends heavily — and subtly — on assumptions regarding the distribution of returns of underlying assets (stocks, bonds etc).

    If it is not clear what innovation — and complexity — are buying us we should steer clear, or at least be very cautious. As Scherer pointed out (in response to a question), there is little evidence that the explosion in variety and complexity of financial products since the 80s has actually done anything to make finance more efficient, e.g. by reducing the cost of capital to firms. Of course, it is very difficult to assess the benefits of innovation in any industry, let alone finance, but the basic point that 1940s through 1970s (dull banking) saw as much “growth” in the real economy as the 1980s-2000s (exciting banking) should make us think twice about how much complexity and innovation we need in financial products.

Finally, and on a more theoretical note, I’d also like to have seen more discussion about exactly why standard backward-recursion/rational-market logic fails here and what implications the answers have for markets and their regulation. In particular, one would like to know why knowledge of a bubble’s existence in period T doesn’t lead to its unwinding (and hence, by backward recursion, to its unwinding in period T-1, and then T-2, etc., until the bubble never existed). There are various answers to this in the literature based on things like herding, the presence of noise investors, and uncertainty about termination, but it would be good to have a summary, especially as regards welfare implications (are bubbles good?) and what policy interventions different theories prescribe.

A History in Bits of Bits in History

I’ve started work on a book on the “Information Age”. Still at a very early stage and largely outlines, but I do have a first draft of the introduction which is available below. I also have a tentative title: This Information Age: A History in Bits of Bits in History.


We live in an information age and we live in a digital age — and these twin aspects of our present existence are mutually intertwined. As the volume and role of information have grown, so too have the challenges of managing and handling it. By making information digital — converting it into electronic ‘bits’ — we have made it vastly easier to store, manage, analyze and transfer. The triumph of information therefore depends fundamentally on the triumph of the digital — without the powers of digital technology the current cornucopia of information would be impossible, indeed unthinkable. At the same time digital technology is, fundamentally, information technology. Information processing, storage and transmission are what digital technology does and why it exists. Without the driver of our informational needs the digital age would likely never have been born, let alone reached maturity.

Thus, information and the digital are symbiotic: the advance of information necessitates the development of digital technology while that technology makes possible, and encourages, the advance of information. Put simply, information drives digitization and digitization drives information.

This book explores these twin proliferations, examining their interaction and the way in which our present, and future, has been, and is being, shaped by these processes. The revolution in communications that started with Morse and Marconi, and the revolution in processing that started with ENIAC and Shockley’s transistor, are still continuing. Almost exactly fifty years on from the first electronic computers of the Second World War period, the last decades of the twentieth century witnessed another major development in the simultaneous mass adoption of two distinct but related communication technologies: the Internet/World-Wide-Web and mobile telephony. Suddenly cheap digital communication, and the activities it makes possible, are available on a mass, global scale.

These changes are not to be seen purely at the social or technological level. Already much of the economy is derived directly, or indirectly from transactions in information. The implications are to be seen from the smallest to the largest levels. Today, at the dawn of the 21st century, many of the most well-known, and most powerful companies in the world base their business on information. Perhaps pre-eminent among them is Google, an entity whose business is built upon the acquisition and analysis of the vast information space that is the web — though interestingly their revenue primarily derives not from selling that information directly but from selling the attention it generates.

We should also not forget software, that most fundamental and crucial substance of the information age — being both itself the purest kind of information and, as algorithms made ‘flesh’, the information ‘machinery’ which manages and processes all other data. What is more, software is now not only one of the largest industries on the planet, but is also ubiquitous in practically every kind of business, large or small. In a period of a little over 30 years this industry has given us both Microsoft, the most successful monopolist the world has ever seen, and the peer-produced work of the Free/Open-Source movement.

Here then we have information built upon information, data upon data. It is no wonder that some ask whether these spiralling skyscrapers of the digital age ever touch earth, and, if so, where. Are we fashioning ourselves a heaven or a hell? More prosaically, what do we gain, and what do we lose, as everything that glitters becomes bits? Will knowledge be available to all for the cost of an internet connection or ever more proprietarized and controlled? Will democracy be made good by a world of informed and active citizens, or disintegrate into a million insular and self-reinforcing Babels?

This is a technological, economic and social story. It sprawls out over our history, small at first but growing with such rapidity in the last few decades that information, and its associated ‘revolution’, is now one of the central elements of our world. So recent, rapid and widespread are these changes, ramifying and filtering into so many diverse aspects of our existence, that it is hard for us here and now to comprehend their scope or predict their future, and to do full justice to this subject is a clear impossibility.

Instead, in this work we must take a different route, proceeding by way of survey, allusion and illustration. In short we offer a history in bits of ‘bits’ in history.

To Lose a Battle: France 1940 by Alistair Horne

7/10. Well written and fascinating, particularly in its clear demonstration of the way the French just ‘gave up’ (both generally in the inter-war period and in 1940 itself). I would have preferred more analytical clarity regarding exactly when things went wrong and why — at some moments Horne seems to be suggesting that a sufficiently active response by the French in the first few days (between the 12th and the 14th of May) might have made a decisive difference in reversing the tide, at others that the Germans’ superiority in weapons, tactics and men (quality, not necessarily quantity) meant that France was doomed from the start. The relative success of the few British sallies against the Germans makes me incline more towards the former possibility. I also think this view may be warranted by the concerns evinced so frequently by those within the German General Staff (and Hitler himself) about the vulnerability of their flanks, as well as the huge convoys through the Ardennes, in the first few decisive days of the battle. If this is the case, it shows that what is today considered one of the greatest and most brilliant military victories of all time might well have ended up as another failed Schlieffen Plan.

Death in Hamburg: Society and Politics in the Cholera Years, 1830-1910 by Richard Evans

6/10.

This book promises much but ultimately rather disappoints, largely because of its tendency to lose focus, sprawling into this or that side-avenue. Partly this must be due to a lack of clarity as to what the book is about — an impression strongly reinforced by the book’s afterword, which does much to illuminate the intentions of the author.

Is this a narrative history? An analytic investigation of public health provision, focusing especially on the 1892 epidemic? A wide-ranging overview of Hamburg society and the mentality of its dominant classes? A Marxist-influenced study of class tension and conflict? Or …? The author does not seem to be sure. The result is rather a mish-mash.

At some points we seem to be investigating the political and social reasons for Hamburg’s poor public health outcomes, in particular the constant fighting between the different ‘fractions of capital’ (in particular the merchant/lawyer senators and the property-owners) over the provision of public goods; at others we have a detailed description of working-class living conditions; at another a history of medical approaches to cholera and other diseases in the 19th century; and at yet another a detailed account of how the ‘dominant’ classes used charitable support, both in general and after the 1892 epidemic, to exercise social and moral ‘control’.

Of course, it is possible these different approaches and angles could have been woven together to produce a single rich and compelling whole. But this is not so. Take the main focus of the book, which I take to be the cholera epidemic of 1892 together with its causes and outcomes. By the time I had finished the more than 700 pages I was still unsure as to what, in Professor Evans’s view, were the main reasons for Hamburg’s terrible performance in comparison with other German (or European) cities. To pick just a few of the possible ones:

  1. The failure to develop sand-filtration for the public water supply. Was this in turn due to:
    • The form of the Citizens’ Assembly, in particular the ability of the property owners to block improvements that might result in reductions in their profits.
    • Early investment in a new water system which then made it relatively more costly to upgrade later (Hamburg was one of the first cities in Germany to develop an external reservoir).
    • Ideological opposition (see next items)
  2. The ideological commitment of Hamburg’s ruling groups to ‘Trade’ and ‘Laissez Faire’
    • Reinforced, perhaps, by direct self-interest in the case of ship-owners and others for whom quarantine meant serious disturbance to their work or enterprise
  3. The inefficient governance structures (in particular the operation and make-up of the Senate and Burgomaster)
    • Hamburg’s governance compared particularly poorly with the more efficient, though also more authoritarian, action of the Imperial government (particularly that of the Imperial Health Office and Koch).
  4. Continuing support in medical circles (and in administrative positions) for ‘miasmatist’ rather than ‘contagionist’ theories of disease (especially in relation to Cholera)
  5. The inadequate living conditions of the poor especially in the ‘Alley Quarters’.
  6. Incorrect medical treatment either due to lack of medical knowledge or incompetence.
  7. The (in)ability of different socio-economic groups to follow the medical instructions provided — whether because of wealth (e.g. ‘rich’: able to have their servant boil all their water, ‘poor’: unable to resist the fruit which is suddenly cheaply available because normally denied it), literacy (can one read the instructions distributed), respect for ‘authority’, etc.

One would not expect to have a single explanation put forward, but it would be useful to have some indication of which of these items were the more important, particularly where different reasons are substitutes not complements. For example, at several points Evans appears to indicate that the water-supply was the single biggest determinant of death by far (he cites a particularly illuminating comparison of a set of apartments that drew its water from two different sources). But if this is so then almost all of the focus should be on the water-supply question and why this public good was not present in Hamburg when it was elsewhere. No doubt, in answering this, one will be led onto many of the other items as secondary causes, but an important step will have been made in stratifying, and thereby clarifying, the analysis. Furthermore, from this perspective an explicit comparative analysis with other localities becomes essential. While Evans does perform this to some extent, it is largely in terms of the behaviour of the localities in 1892 (e.g. re. the imposition of quarantine) rather than the more important investigation of why those localities had sand filtration while Hamburg did not — in particular, why had they found the political will to provide this important public good while Hamburg had not? Why were the property-owners in Bremen, Berlin and elsewhere not able to block these same kinds of public infrastructure projects?

Once led down this route, the reader must be increasingly concerned about the weight, and attention, Evans focuses upon socio-ideological explanations (made particularly noticeable by the frequent intrusion of Marxist historiographical language and approach — an influence made explicit in the afterword). As Evans acknowledges, in respect of most other disease outcomes Hamburg did little worse than elsewhere in Germany. If this is so, how much does the 1892 epidemic really tell us about the society and politics of Hamburg (and vice-versa)? Perhaps if Hamburg had not invested early in its water supply, it would not have had an ‘out-of-date’ one by 1892? Perhaps if Veresman had been Burgomaster, more rapid and effective steps would have been taken early on that would have dramatically reduced the impact? Perhaps if Hamburg had been more authoritarian (rather than more democratic) the Senate would have been able to improve the water-supply earlier?

This brings me on to my final comment. The contemporary relevance of the book is emphasized in several places, for example on the back-jacket text and in several of the blurbs — Gordon Craig’s NYRB review extract quoted on the cover reads “… about the contemporary relevance of this book there can be no question”. Of course, we should allow for the fact that this was published in 1987 when the AIDS epidemic was receiving very widespread attention. But one does need to ask exactly what one does learn from this book regarding public health? That we should invest in public goods projects? That it is good for medical science to be accurate and correct? That one should respond rapidly to an outbreak of a contagious disease?

Surely the answer to all of these is yes. The devil, of course, is in the detail. How do we trade off the benefits of a rapid and sharp response, which is likely to involve sharply restricting the movement of persons and goods, against the costs of such restrictions, both social and commercial? What institutional structures will result in adequate investment in public goods and rapid response to public concerns? Are there tensions between responsiveness to concerns (e.g. via full representative government!) and effectiveness in action (which might necessitate a single executive office with significant power and autonomy)? Finally, if the answers to these questions are reasonably obvious (e.g. it’s Democracy, stupid!) then what prevents a polity, whether today or in the 19th century, from acting in the correct way? (Answer: entrenched powers and vested interests — but how did these come into being and how are they overcome?)

The test then of Evans’ book is whether it supplies us with interesting answers to these, more nuanced, questions. In this regard the book, I feel, comes up short. Without a comparative analysis at the social, and more importantly, political level in other German (or European) cities how can we know whether Hamburg’s terrible experience was the result of a common generalizable pattern or mere historical accident?

In sum, this is an interesting book, albeit a little lengthy and heavy-going in places. Confused as to its structure and purpose, it largely fails to deliver on its promise to answer the main question posed on its jacket: “Why were nearly 10000 people killed in six weeks in Hamburg while most of Europe was left almost unscathed?” As such it is also limited in the light it can throw on public health problems today. Nevertheless the reader will have been left with a wide-ranging coverage of a whole variety of 19th-century topics, most significantly the two items explicitly mentioned in the title: Hamburg and cholera.

Cannibalism and the Common Law by A Simpson

7/10. Cannibalism and the Common Law: The Story of the Tragic Last Voyage of the Mignonette and the Strange Legal Proceedings to Which It Gave Rise by A. Simpson, University of Chicago Press, 1984. More history than legal analysis. Interesting throughout but meandering slightly towards the end. One quote I wish to memorialize, which, though rather apart from the main thrust of my book, made me wonder once again about the general tension between ‘definiteness’ (assertiveness/simplicity) and ‘correctness’, especially in the arena of public policy and democratic politics. Is it always necessary, as the quote suggests, for successful campaigns to simplify and exaggerate in order to obtain an effect?

In the period immediately before the case of the Mignonette [1884], controversies over the protection of sailors and passengers had been inflamed by the activities of the radical MP for Derby (1868-80), Samuel Plimsoll, ‘the sailor’s friend’, whose approach to the problem favoured prior intervention [i.e. regulation] … He [Plimsoll] concentrated first simply on unseaworthy ships as a cause of mortality and started a campaign to amend the law with a resolution in the House of Commons in July 1870. His most effective appeal was to public opinion through the publication of Our Seamen in 1872, attacking the ship-owners of the over-insured, overloaded “coffin ships”, which caught the public imagination. Plimsoll was no doubt careless with his facts, ill-informed, and sometimes violent in his language; but perhaps successful campaigns require devils, conspiracies and simple solutions. [emph added] In reality ships were lost for a variety of reasons, and unseaworthiness was only one of them.

Is Game Theory of Any Value for the Historical Analysis of Institutions?

I was much struck by the generally pessimistic tone of Gregory Clark’s lengthy review, in the JEL’s September issue, of Avner Greif’s Institutions and the Path to the Modern Economy. These comments have wider implications for the application of economic tools (especially game theory) to the analysis of historical outcomes, particularly in relation to institutions, and I have therefore thought it worth excerpting from the review here (at some length).

When You Have More Variables than Data You Aren’t Explaining Anything

As noted, Greif defines an institution as a self-reinforcing set of behaviors. Greif pioneered in applying game theory to historical institutional analysis and his 1993 study of the Maghribi traders remains a classic of this still modest genre. This was certainly an exciting development for economists. For the first time, [sic] seemingly grounded the explanation of informal institutions in optimizing individual rational behavior. Behaviors that would seem to the layman to be based on blind irrational custom could be shown to be consistent with individual optimization. Given the incredible intellectual elaboration of game theory, and its meager harvest in terms of actual economic applications, the finding was welcome to both game theorists and to economic historians. [ed: a tough but fair assessment …] The Maghribi study also allowed for the possibilities of institutional change resulting just from changes in parameters. Since the equilibrium depended on certain parameter values, changes in transportation costs or observability could terminate the old equilibrium and lead to a new one. The 1993 article seemed to point to new micro foundations for institutions that would ground them in individual maximizing behavior.

But this book is almost certainly not what many economists who welcomed the 1993 article expected as the generalization of its ideas. Some indeed will be shocked by, and perhaps hostile to, the path Greif has taken. Were economists of a more literary bent, the word apostasy would be on their lips. In a search for generality, Greif concludes that such a set of limited rational actor assumptions is not constraining enough to describe real-world institutions. For a start, “multiple equilibria usually exist in the repeated situations central to institutional analysis” (p. 125). There have to be more constraints on the structure of the interaction to explain the equilibrium. These constraints include “cognitive norms” (p. 128) as well as “the social and normative foundation of behavior” (p. 143). Issues such as “losses of esteem,” “norms,” “fairness,” or “social exchange” have to be introduced. Also such social and normative behavior is “situationally contingent” (p. 144). [ed: and we now have so many parameters we could probably explain anything …]

Greif posits this as just an extension and elaboration of the original individualistic rational-actor game theoretic ideas. Once we are compelled to admit, however, into the explanatory apparatus almost the entire sociological zoo of ill defined and unmeasurable constructs, we lose all explanatory power. Explanatory power requires few objects and small degrees of freedom. Greif notes that “a useful feature of game theory is that it allows us to study all intertransactional linkages—economic, coercive, social and normative—simultaneously” (p. 147). But he does not seem to appreciate the price of this generality in terms of testability. All we are left with is the idea that people operating within institutions act as they do because, given the cognitive, intellectual, cultural, and normative constraints they face, their actions seem to them as being the best available. But, in an informal sense, we knew that already. Without any consideration of the ins and outs of game theory, we can appreciate that any lasting institution likely constitutes some set of self-reinforcing behaviors. Yanomamo males, for example, engaged in recurrent raids against other bands aimed at capturing women and revenging previous raids (Napoleon A. Chagnon 1983). This was clearly an institution in the sense of Greif and must be maintained by some kind of self-reinforcing set of behaviors. But we knew that, even if we had never studied game theory. So what insights have we gained from page after page of elaboration on the idea of equilibria and the elements that enter into them (pp. 124-53)? If we were able to reduce all such social equilibria to a game theory equilibrium of purely self interested rational individuals interacting with common knowledge that would be a radical, novel, and testable theory. This book denies that possibility, but without providing any alternative that has empirical content. [pp. 735-736, emphasis added]

The Problem of Too Many Equilibria (in Dynamic Games with Beliefs)

… Greif here starts from the basis that we will never be able to predict institutional structure from exogenous features of the situation—including institutional history. … Given the many potential stable equilibria in each institutional context, the outcomes are inherently unknowable. After the attention given to elaborating the theory of institutional stability and dynamics in the preceding 350 pages, this conclusion comes as something of a surprise. The structure and tone of the previous discussion is that of laying the groundwork for a theory of institutions. The reader now learns that the extended theory encompasses a perhaps uncountable number of possible institutional equilibria, so that there can be no advance prediction.

Just as deductive methods cannot succeed, Greif asserts also that inductive generalization about institutional forms will also fail to reveal any patterns. This is because unobservable elements of the situation—beliefs and norms—are crucial to the determination of the outcome. The same observable elements will be associated with radically different institutional equilibria. … [p.737]

Case Studies (and Historical Anecdote) Aren’t Economics (or Economic History)

[The empirical approach recommended by Greif] As conducted in the book, [this] is essentially the method of “analytical narratives” popularized by Greif and Robert Bates, Margaret Levi, Jean-Laurent Rosenthal, and Weingast. An analytical narrative consists of matching institutional detail to a formal, or more often informal, interpretation of the situation as some kind of rational choice equilibrium, interpreted in the broad sense above (Bates et al. 1998). It is not clear how this is distinguished from such things as Harvard Business School case studies. As applied by Greif and his colleagues, an “analytical narrative” seems to be just an interpretation of an institution in terms of a loosely defined equilibrium. This is fine as an approach to generating hypotheses, but as an endpoint of analysis, as it generally is in the book, it offers little conviction. [p. 737]

In Conclusion: There Isn’t Much of a Future

… Greif intends in his book to develop at least the outline of a new, micro grounded theory of institutions. Stating, explaining, and elaborating this theory takes 503 densely written pages, including a primer on game theory. By the end, however, this reviewer, to the contrary, read it mostly as a demonstration of the impossibility of a systematic account of institutions along the lines he proposes. The efflorescence of concepts, combined with the constriction of possible empirical tests, makes … prediction and testing impossible. And this shows in the case studies conducted in the book. Each institution in his formulation has to be analyzed in its full idiosyncracy, aided by the expert judgment of the investigator as to the social and epistemological context. But, as we saw in the case of the Podesteria, that kind of analysis, even in the hands of a careful enquirer like Greif, is fraught with the danger of conflating conjecture and fact. Kant’s Prolegomena to any Future Metaphysics as a Science never led to his proposed science of metaphysics. Unfortunately Greif’s Prolegomena to a future institutional theory similarly serves mainly to indicate the barriers to a science of institutions.

Overlord: D-Day and the Battle for Normandy 1944 by Max Hastings

7.5/10. Finished a few weeks ago, this is another (rather earlier) example of Hastings’ skill in writing penetrating and engaging military history, as well as his willingness to be critical of existing ‘sacred cows’. Among other things Hastings:

  • Argues that the famous Mulberrys were probably a waste of time and resources.
  • Shows how the Air Force’s extreme unhelpfulness (largely driven by its own ambitions and obsession with civilian bombing) was a serious handicap to the whole campaign.
  • Supplies a sharp corrective regarding Patton’s reputation, pointing out that up against reasonable German opposition Patton did little better than anyone else.
  • Shows clearly how it was Hitler, almost more than anyone else, who contributed to the disastrous collapse of German forces in August-October 1944 by his insistence that no retreat of any kind be considered.
  • Provides many examples of the poor quality of equipment, leadership, and men, especially among the American forces, and how these deficiencies hindered the Allied campaign. In particular, Allied tanks were almost never a match for their German counterparts, and on any occasion that Allied and German troops met on anything near an equal footing the Germans won.1 In addition he details several clear cases of simple cowardice or unwillingness to fight among the Allied troops and/or extremely poor leadership stretching from the lowest levels to the highest. This is not to criticize — who can say what they would do in such circumstances — and in many ways it reflects the fact that while the Germans were a nation that had for many years been ‘obsessed’ with soldiering, the Allied troops were ‘civilians in uniform’; but it does supply a useful corrective to those rose-tinted visions supplied by films such as The Longest Day or the newsreel footage showing Allied soldiers racing past cheering French civilians.

Finally, and as an aside, while good, the book also displays the limitations of the traditional book format as a method for presenting this sort of material (i.e. military history, with its strong connections between the temporal and spatial aspects of events). At least for me, the attempt to render particular troop movements, or the direction of battles, in prose never really succeeds, and one finds oneself constantly flicking back to the (rather limited) maps in an attempt to connect the descriptions of events, the failures and successes of particular thrusts, with their location, both geographically and within the overall direction of the campaign. It therefore seems to me that this kind of subject is the sort of thing most suited to the kind of approach proposed by the Microfacts / Weaving History project currently in the early stages of its development at the Open Knowledge Foundation. Here one would be able to marry maps with descriptions, photos with actions, time with space, to provide a much clearer insight into what was going on.

  1. From p. 84 ff. “The American Colonel Trevor Dupuy has conducted a detailed statistical study of German actions in the Second World War. Some of his explanations as to why Hitler’s armies performed so much more impressively than their enemies seem fanciful. But no critic has challenged his essential finding that on almost every battlefield of the war, including Normandy, the German soldier performed more impressively than his opponents:

On a man for man basis, the German ground soldier consistently inflicted casualties at about a 50% higher rate than they incurred from opposing British and American troops UNDER ALL CIRCUMSTANCES. [emphasis in original] This was true when they were attacking and when they were defending, when they had local numerical superiority and when, as was usually the case, they were outnumbered, when they had air superiority and when they did not, when they won and when they lost.

It is undoubtedly true that the Germans were much more efficient than the Americans in making use of available manpower. An American army corps staff contained 55 per cent more officers and 44 per cent fewer other ranks than its German equivalent. …

Events on the Normandy battlefield demonstrated that most British or American troops continued a given operation for as long as reasonable men could. Then – when they had fought for many hours, suffered many casualties, or were running low on fuel or ammunition – they disengaged. The story of German operations, however, is landmarked with repeated examples of what could be achieved by soldiers prepared to attempt more than reasonable men could.”

Path-Dependent vs. Ergodic Systems

Consider a metal arm that pivots freely on a pin. If the arm swings in a vertical plane, then no matter where it starts it will always come to rest in the same position, hanging straight down. If instead the arm is (perfectly) balanced so that gravity exerts no restoring force, as when it swings in a horizontal plane, it will stay forever in its initial position. The first case is ergodic: we converge, independent of the starting point, to some particular configuration. The second is ‘path-dependent’ (or dependent on initial conditions): where you end up depends crucially on where you start. The question:
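The arm analogy can be mimicked with two toy one-dimensional dynamical systems (a hypothetical sketch of my own, not part of the original argument): a damped update rule that forgets where it started, and a neutral rule that preserves its starting point forever.

```python
# Toy illustration (hypothetical): ergodic vs path-dependent dynamics.

def evolve(x0, step, n=500):
    """Iterate a one-step update rule n times from initial state x0."""
    x = x0
    for _ in range(n):
        x = step(x)
    return x

# "Hanging arm": a restoring/damping force pulls every start
# towards the same rest state (here, 0).
damped = lambda x: 0.9 * x

# "Balanced arm": no restoring force, so the state never moves.
neutral = lambda x: x

for x0 in (-3.0, 1.0, 7.0):
    print(x0, evolve(x0, damped), evolve(x0, neutral))
```

Running this, the damped system ends up (numerically) at 0 from every starting point, while the neutral system's final state is exactly its initial one: a minimal caricature of the ergodic/path-dependent distinction the question below asks about for evolution, technology, history and language.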

Is animal/technological/historical/linguistic evolution ergodic or path dependent?

More generally, how ergodic or path-dependent are the following processes?

  • (Natural) Evolution
  • Technological change
  • Human history
  • Communication systems such as natural languages
  • Other symbol systems (e.g. games or mathematics)