Category Archives: Notes

Historical Banking Crises and the Rules of the Game

Attended an interesting talk today: “Historical Banking Crises and the Rules of the Game” by Professor Charles Calomiris, Columbia Business School. Sporadic notes below. See also this Weaving History thread on Financial Crises.

Notes

  • One crisis with 20 different explanations. Need to sort these out a little.
  • If banks are uninsured then in a recession banks cut their supply of loans
    • Banks facing losses need to repair their balance sheets, and they can do so either by raising equity or by cutting the supply of loans. The former is hard in a downturn, so they do the latter (see the worked example after this list).
  • Crises aren’t just inherent to human nature or capitalism. “Crisis propensity reflects politically determined rules of the banking game that are conducive to crises:”
    1. industry setup that determines exposure of banks to risk
    2. absence of decent (effective and incentive compatible) central-banking (NB: 2 isn’t a big problem w/o 1)
    3. subsidization of risk by govt policies
  • Panic = moments of severe sudden withdrawal that threatened the system. Observable variable: collective action by NY clearing banks
    • In US (19th and early 20th c.): 1857, 1873, 1877, 1893, 1907 [ed: missing at least two, and I may have got one wrong]
    • All six post-Civil War crises in the US were preceded by a 50% increase in liabilities and a 7% drop in the stock market
    • Britain: 1825, 1836, 1847, 1857, 1866 then none for over a century
  • Solvency crisis: -ve net worth of failed bank > 1% of GDP
    • 140 examples since 1978
    • Rare in past: 4 in 1873-1913
    • Australia: 1893 (10%)
    • Argentina: 1890 (10%)
    • Norway: 1900 (3%)
    • Italy: 1893 (1%)
  • Literature has converged in last 20 years to agree that safety-net provision on balance increases instability (rather than reducing it)
  • Crucial reform in 1858 in UK following 1857 crisis. BoE would no longer intervene in bills market. In 1866 made good on this promise when largest bill discounter went bust (Overend and Gurney)
  • Crisis origins:
    • Loose money: CBs, flat yield curve … (but note: not enough for a crisis on its own)
    • Housing subsidies delivered by leverage. F&F have $1.6 trillion out of $3 trillion total subprime. $350 billion cost on F&F alone.
    • Huge buy-side agency problems
      • Lots of buy-side people buying poor-quality material for clients, facilitated by a big race to the bottom at the ratings agencies
    • Prudential regulation failure
  • Everyone smart knew there was a subprime crisis in mid-2006.
  • Long-term regulatory reforms
    • Micro-prudential reform: focus on measurement of risk
    • Credit rating agency reform
    • Resolution policy/TBTF Problems
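
The worked example referenced in the list above (my own illustration with made-up numbers, not from the talk): if a bank targets a capital ratio k and cannot raise equity, a loss forces a cut in assets (loans) of roughly 1/k times the loss.

    % Hypothetical numbers for illustration only.
    % Bank targets capital ratio k = E/A = 8%: assets A = 100, equity E = 8.
    % A loss L = 2 leaves equity E' = 6. Without new equity, restoring the
    % ratio requires shrinking assets to A' = E'/k and cutting loans by
    % \Delta A = A - A' = L/k, a multiple 1/k of the original loss.
    \[
      A' = \frac{E - L}{k} = \frac{6}{0.08} = 75,
      \qquad
      \Delta A = \frac{L}{k} = \frac{2}{0.08} = 25 .
    \]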

The Elusive Disappearance of Community

From Laslett, ‘Philippe Ariès and “La Famille”’, p. 83 (quoted in Eisenstein, p. 131):

The actual reality, the tangible quality of community life in earlier towns or villages … is puzzling … and only too susceptible to sentimentalisation. People seem to want to believe that there was a time when every one belonged to an active, supportive local society, providing a palpable framework for everyday life. But we find that the phenomenon itself and its passing — if that is what, in fact, happened — perpetually elude our grasp.

Talk by Frederick Scherer: Deregulatory Roots of the Current Financial Crisis

Last Thursday I attended a talk by Frederick Scherer at the [Judge] entitled: “Deregulatory Roots of the Current Financial Crisis”. Below are some sketchy notes.

Notes

Macro story:

  • Huge current account deficit for last 10-15 years
    • Expansionary Fed policy has permitted this to happen while interest rates are low
  • Median real income has not risen since the mid-1970s
    • Cheap money means personal savings have dropped consistently: 1970s ~ 7%, 2000s ~ 1%
  • Basically overconsumption

Micro story:

  • Back in the old days, banking was very dull — three threes story, “One reason I never worked in the financial industry: it was very dull when I got my MBA in 1958”
  • S&L story of 1980s: inflation squeeze + Reagan deregulation
    • FMs: Fannie Mae, Freddie Mac get more prominent
    • [Ed]: main focus here was on pressure for S&L to find better returns without much mention of the thoughtlessness of Reagan deregulatory approach (deposits still insured but S&L can now invest in anything) and the fraud and waste it engendered — see “Big Money Crime: Fraud and Politics in the Savings and Loan Crisis” by Kitty Calavita, Henry N. Pontell, and Robert Tillman
  • In 1920s there were $2 billion of securitized mortgages (securitization before the 1980s!)
  • Market vs. bank finance for mortgages: market more than bank by mid-1980s [ed: I think -- graph hard to read]
  • To start with: FMs pretty tough when giving mortgages, but with new securitizers and lots of cheap money, standards dropped => moral hazard for issuers [ed: not quite sure why this is moral hazard -- securitizers aren't the ones who should care, it's the buyers who should care]
  • Even if issuers don’t care, buyers of securitized mortgages should care and they depended on ratings agencies (Moody’s, S&P etc.)
  • Unfortunately, ratings agencies had serious conflicts of interest as they were paid to do ratings by firms issuing the securities! Result: ratings weren’t done well
  • Worse: people ignored systemic risk in the housing market and therefore made far too low assessment of risk of these securities [ed: ignoring systemic risks implies underestimating correlations -- especially for negative changes -- between different mortgage types (geographic, owner-type etc). Interesting here to go back and read the quarterly statement from FM in summer 2008 which claims exactly this underestimate.]
  • Banks over-leveraged for the classic reason (it raises your profits if things are good — but you can get wiped out if things are bad); see the worked example after this list
    • This made banks very profitable: by mid 2000s financial corporations accounted for 30% of all US corporate profits
    • Huge (and unjustified relative to other sectors) wage levels. Fascinating evidence here provided by correlating wage premia to deregulation: fig 6 from Philippon and Reshef shows dramatic association of wage premium (corrected for observable skills) with (de)regulation. Wage premium goes from ~1.6 in 1920s to <1.1 in 1960s and 70s and then back up to 1.6/1.7 in mid 2000s
  • Credit default swaps and default insurance: not entirely new but doubled every year from 2001 to the present ($919 billion in 2001 to $62.2 trillion in 2007)
    • Much of the time CDS issued without any holding of the underlying asset
    • There was discussion of regulating CDSes in the 1990s (a blue-ribbon panel reported in 1998) but due to shenanigans in the House and Senate led by Phil Gramm (husband of Wendy Gramm who was head of Commodity Futures … Board), CDSes were entirely deregulated via an act tacked onto the Health-Education-Appropriations bill in 2001.
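
The worked example referenced in the list above (my own made-up numbers, not Scherer’s): leverage multiplies both gains and losses on equity, ignoring funding costs.

    % Hypothetical illustration. Equity E = 5, assets A = 100 (leverage A/E = 20x).
    % Ignoring the cost of debt, the return on equity is roughly leverage times
    % the return on assets:
    \[
      r_E \approx \frac{A}{E}\, r_A
      \quad\Rightarrow\quad
      20 \times (+2\%) = +40\%,
      \qquad
      20 \times (-5\%) = -100\% \ \text{(equity wiped out)} .
    \]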

It goes bad:

  • Housing bubble breaks in 2007 or even 2006
    • Notices of default start trending upwards in mid 2006
  • [ran out of time]

What is to be done:

  • Need simple, clear rules
    • A regulator cannot monitor everything day-to-day
  • Outlaw Credit Default Swaps
  • Anyone who issues CDOs must “keep skin in the game”
  • Leverage ratios. Perhaps? Hard to regulate.
  • Deal with too big to fail by making it hard for “giants to form” and breaking up existing over-large conglomerates
  • We need to remember history!

Own Comments

This was an excellent presentation though, as was intended, it was more a summary of existing material than a presentation of anything “new”.

Not sure I was convinced by the “remember history” logic. It is always easy to be wise after the event and say “Oh look how similar this all was to 1929”. However, not only is this unconvincing analytically — it is really hard to fit trends in advance with any precision (every business cycle is different) — but before the event there are always plenty of people (and lobbyists) arguing that everything is fine and that we shouldn’t interfere. Summary: Awareness of history is all very well but it does not provide anything like the precision needed to support pre-emptive action. As such it is not really clear what “awareness of history” buys us.

More convincing to me (and one could argue this still has some “awareness of history” in it) are actions like the following:

  1. Worry about incentives in general and the principal-agent problem in particular. Try to ensure long-termism and prevent overly short-term and high-powered contracts (which essentially end up looking like a call option — see the sketch after this list).

    Since incentives can be hard to regulate directly one may need to work via legislation that affects the general structure of the industry (e.g. Glass-Steagall).

    Summary: banking should be a reasonably dull profession with skill-adjusted wage rates similar to other sectors of the economy. If things get too exciting it is an indicator that incentives are out of line and things are likely to go wrong (quite apart from the inefficiency of having all those smart people pricing derivatives rather than doing something else!)

  2. Be cautious regarding financial innovation especially where new products are complex. New products have little “track record” on which to base assessments of their benefits and risks and complexity makes this worse.

    In particular, complexity worsens the principal-agent problem for “regulators” both within and outside firms (how can I decide what bonus you deserve if I don’t understand the riskiness and payoff structure of the products you’ve sold?). Valuation of many financial products such as derivatives depends heavily — and subtly — on assumptions regarding the distribution of returns of underlying assets (stocks, bonds etc).

    If it is not clear what innovation — and complexity — are buying us we should steer clear, or at least be very cautious. As Scherer pointed out (in response to a question), there is little evidence that the explosion in variety and complexity of financial products since the 80s has actually done anything to make finance more efficient, e.g. by reducing the cost of capital to firms. Of course, it is very difficult to assess the benefits of innovation in any industry, let alone finance, but the basic point that the 1940s through 1970s (dull banking) saw as much “growth” in the real economy as the 1980s-2000s (exciting banking) should make us think twice about how much complexity and innovation we need in financial products.
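
The sketch referenced in item (1) above (my own illustration, not from the talk): a high-powered bonus has a convex, option-like payoff, and convex payoffs reward risk-taking.

    % Stylised bonus contract: the employee receives a share \alpha of profit \pi
    % above a hurdle K, and nothing (but no personal loss) below it:
    \[
      \text{bonus}(\pi) = \alpha \,\max(\pi - K,\, 0)
    \]
    % This is the payoff of a call option on \pi. Because it is convex, a
    % mean-preserving increase in the riskiness of \pi raises the expected bonus
    % (Jensen's inequality), so the contract itself rewards risk-taking,
    % particularly tail risk that materialises after bonuses have been paid.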

Finally, and on a more theoretical note, I’d also like to have seen more discussion about exactly why standard backward-recursion/rational-market logic fails here and what implications the answers have for markets and their regulation. In particular, one would like to know why knowledge of a bubble’s existence in period T does not lead to its unwinding (and hence, by backward recursion, to its unwinding in period T-1, then T-2, and so on until the bubble never existed). There are various answers to this in the literature based on things like herding, the presence of noise traders, and uncertainty about termination, but it would be good to have a summary, especially as regards welfare implications (are bubbles good?), and what policy interventions different theories prescribe.

Of Mice and Academics: Examining the Effect of Openness on Innovation

Just came across an interesting working paper put out last Autumn that is relevant to the openness and innovation debate. Entitled: Of Mice and Academics: Examining the Effect of Openness on Innovation and authored by Fiona Murray, Philippe Aghion, Mathias Dewatripont, Julian Kolev and Scott Stern, it is an attempt to bring some empirical evidence to bear in an area that so far has seen little.

It uses a natural experiment in the late 1990s when there was a significant reduction in patent restrictions (an increase in openness) related to the use of genetically engineered mice. Similar to an earlier paper of Stern and Murray’s, the paper estimates the impact on science by exploiting the linkage between certain papers and particular genetically engineered mice (both those affected by the increase in openness and those that were not). The overall conclusion is that increased openness does have a significant positive impact. This does something to bear out the suggestions of existing theoretical work such as Bessen and Maskin’s on Sequential Innovation and my paper on Cumulative Innovation and Experimentation — which explicitly discusses impacts of IP on scientific experimentation.

For full summary see the abstract inlined below (emphasis added):

Scientific freedom and openness are hallmarks of academia: relative to their counterparts in industry, academics maintain discretion over their research agenda and allow others to build on their discoveries. This paper examines the relationship between openness and freedom, building on recent models emphasizing that, from an economic perspective, freedom is the granting of control rights to researchers. Within this framework, openness of upstream research does not simply encourage higher levels of downstream exploitation. It also raises the incentives for additional upstream research by encouraging the establishment of entirely new research directions. In other words, within academia, restrictions on scientific openness (such as those created by formal intellectual property (IP)) may limit the diversity and experimentation of basic research itself. We test this hypothesis by examining a “natural experiment” in openness within the academic community: NIH agreements during the late 1990s that circumscribed IP restrictions for academics regarding certain genetically engineered mice. Using a sample of engineered mice that are linked to specific scientific papers (some affected by the NIH agreements and some not), we implement a differences-in-differences estimator to evaluate how the level and type of follow-on research using these mice changes after the NIH-induced increase in openness. We find a significant increase in the level of follow-on research. Moreover, this increase is driven by a substantial increase in the rate of exploration of more diverse research paths. Overall, our findings highlight a neglected cost of IP: reductions in the diversity of experimentation that follows from a single idea.
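
For reference, the differences-in-differences estimator mentioned in the abstract has the generic form sketched below; this is the standard textbook specification, not necessarily the exact regression used in the paper.

    % y_{it}: follow-on research using mouse-paper pair i in period t;
    % Treated_i = 1 for mice covered by the NIH openness agreements;
    % Post_t = 1 after the agreements take effect.
    \[
      y_{it} = \alpha + \beta\,\text{Treated}_i + \gamma\,\text{Post}_t
               + \delta\,(\text{Treated}_i \times \text{Post}_t) + \varepsilon_{it}
    \]
    % \delta is the differences-in-differences estimate: the change in follow-on
    % research for treated mice over and above the change for the control mice.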

ESRC Well-Being Research Workshop at the LSE

Last Friday I attended an ESRC Research Workshop on Well-Being held at the LSE. According to the blurb:

The time is ripe for a major expansion of well-being research in Britain – in conjunction with leading overseas colleagues. Among public policy-makers, there is an increasing desire to promote well-being and a need for evidence on what works to promote it. And among social scientists there is a new capacity to throw light on well-being: its causes and its effects. Worldwide, research on these topics has already demonstrated the scope for rapid and important advances in knowledge. But the scale of such research in Britain is far too small. This one-day workshop has been organised to explore the possible intellectual content of such a cooperative endeavour.

Some of the most prominent researchers in this area were in attendance to give an overview of current work and I took some ‘impressionistic’ notes which can be found below.

Well-being Research: the Way Forward by Daniel Kahneman

  • Living and thinking about it
  • Attention
  • There are 2 selves
    • Experiencing self
    • Remembering/Score-keeping self
  • Used to think that experiencing self was what was important (Edgeworth)
  • Remembering self not very accurate — cites own research on pain for medical procedures
  • But now thinks remembering self is more important
    • Implicit in this is an acceptance that there are at least 2 distinct dimensions
  • Current well-being/happiness questions are problematic because they are mixed containing some experiencing self and some evaluative/remembering self
  • New, huge, dataset from Gallup is making a big difference
    • 1000 people polled a day with 40 questions on well-being
  • Ladder of Life question in Gallup measures ‘Life Evaluation’
  • Despite having different questions replicates existing results from DRM etc
  • Attention and ‘Focusing Illusion’
    • Norbert Schwarz study: how much pleasure do you get from your car
      • Reasonable correlation with car monetary value
    • Also asked: how much pleasure did you have in your commute this morning
      • Zero correlation with monetary value
    • How many dates did you have last month and how happy are you these days
      • Happy first, dating second no correlation in response
      • Reverse order: large correlation
    • Leads to errors in prediction since we know attention alters valuation
      • e.g. to predict pleasure/utility from car need to ask: how much enjoyment do I get from car when I do not think about it
      • How happy would you be if you moved to California?
      • But this is mistaken [ed: is this not often taken into account as evidenced by phrase 'always think the grass is greener']
  • Gallup data: huge correlation with money
    • Remembered happiness: Money worries, health coverage, general health are main predictors
    • Experienced happiness: pain + social activities
    • Children: negative impact on experienced happiness, but when asked about their children people are very positive — both views are correct
  • Easterlin hypothesis:
    • Some questioned this (Stevenson and Wolfers)
    • But focus on ladder of life question
    • Looking at positive/negative affect still find that within-group slope with income is steeper than across group/time effect (i.e. Easterlin hypothesis)

Income and happiness in developed countries by Steve Nickell

  • No obvious relationship on the ladder of life question
    • But cross-country regressions are pretty dubious (too many variables)
  • Time-series data
    • Happiness regressed on log and quadratic in log income plus controls — pretty good fit
    • Curvature for classic CES utility, u(y) = y^{1-rho}/(1-rho): get rho ~ 1.2 across a whole variety of countries (1.1 – 1.4) (see the note after this list)
    • Time series: happiness is pretty horizontal (in the US) though income has risen a lot (even taking account of dispersion)
    • Some reasonable support for relative income hypothesis
  • But really want panel data (deals with endogeneity)
    • Only one such panel: GSOEP (West Germany)
    • Regress happiness on log income, log reference income, controls (state,year,individual dummies etc)
    • Income alone: large +ve coefficient
    • Include relative income: income coefficient disappears
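
A note on the curvature claim flagged in the Nickell list above (my own gloss; I assume the ‘classic CES’ shorthand refers to the usual iso-elastic utility-of-income function):

    % Iso-elastic utility of income:
    \[
      u(y) = \frac{y^{1-\rho}}{1-\rho} \quad (\rho \neq 1),
      \qquad u(y) = \ln y \quad (\rho = 1).
    \]
    % \rho is the elasticity of the marginal utility of income. An estimate of
    % \rho \approx 1.2 means marginal utility falls slightly faster than 1/y,
    % i.e. a bit more curvature than log utility: happiness rises roughly with
    % log income and flattens out slightly faster at high incomes.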

Income and the Evaluation of Life by Angus Deaton

  • Gallup’s World Poll
    • Why it’s great [ed: the value of having early access to proprietary data!]
  • Gallup result is very similar to the World Values Survey (His paper from last year — [ed] see my comments last year)
  • Could argue that the relationship is steep and then flat, but log income seems to fit better
    • Difference here with Steve Nickell
  • Within country analysis
    • Collecting income data within country is hard particularly in poorer countries
    • Get figure of about 0.6 (effect of increase of 1 in log income on ladder)
  • What about Easterlin?
    • Does some analysis in the US and does not get relative income effects at all (with ladder question)
  • Suppose people do care about relative income. There are serious (‘ethical’) problems with a consumption tax or not worrying about GDP growth: you hurt the non-envious and help the envious

Questions on Preceding

  • My question:
    1. If focusing illusion is common across goods does it actually end up leading to bias in/incorrect choices
    2. Once we accept that attention has such large effects it poses difficult questions since it suggests that people’s preferences/enjoyment has a significant endogenous component.
  • Several on relative income
  • Replication across countries

Workshop on happiness research by Michael Marmot, Andrew Steptoe, and Jane Wardle

Michael Marmot

  • Health as a measure of well-being
  • 28 year gap in life-expectancy between poorest part of Glasgow (Calton – 54) and richest (Lenzie – 82)
  • Major wealth effects on health outcomes even though (e.g. in the UK) people have all got enough to have pretty good healthcare
    • Relative effects of income (status?) has a major impact on health
    • Relative position not relative income (income != status — at least not always)
  • Control for environment
    • Whitehall II study: look at poor physical health by deprived living area and grade level in civil service. Deprivation really matters when you are in the lower grades. [ed]: Suggests a) interaction effect b) that status matters more than area you live in
  • Work stress: Coronary heart disease strongly linked to work stress
  • Social relationships: mainly important on negative side (bad interactions are bad for you …)
  • [ed: general murmurings from room throughout data presentation about what these correlations imply. Significant issues of causality and selection bias ...]

Andrew Steptoe

  • Meta analysis of positive affect and health
    • 18% reduction in prob. of mortality (even when controlling for other variables: smoking, BMI, social position etc)
  • Issues:
    • Confounding: even with controls, the direction of causation may go the other way (health to positive affect)
    • Genetics: simple correlation
    • Lifestyle: happier people lead healthier lives (or vice-versa)
    • Biology: positive affect associated with
      • Lower cortisol over working and non-working days
      • Lower heart rate over day
      • Lower systolic BP over the day
      • Reduced inflammatory responses
      • Independent of socio-demographic factors
  • Happiness measure matters (a lot)
    • Using retrospective questionnaire measures find no relationship of positive affect with other stuff
    • But using EMA or DRM (i.e. more instantaneous stuff) find relationships
  • Cross-cultural comparisons: Japan vs. the UK
    • Japan reports less +ve affect than UK (e.g. Gallup)
    • Find this in DRM studies of university women
    • And, importantly, find impact on cortisol levels (UK women lower than Japan)

Mapping Pain and Well-Being in Real Time and Yesterday by Alan Krueger

  • Study in the Lancet (w/ Arthur Stone) on pain in general population (using diary study)
    • Data came from PATS, ~3900 people (by Gallup)
    • Pain rises with age but very flat 45 – 65 (for men and women)
    • Correlated with SES: poorer people in more pain (~20% of people with income under $30k in reasonable to severe pain compared to 7% for > $100k)
    • People in pain work less and watch more television
  • Now doing EMA-PATS study + biological info (Krueger and Stone)
    • Check EMA and PATS are related (strongly correlated: ~0.94 for pain and ~0.92 for happiness)
    • Not a representative sample (v. hard to get participants)
  • A world of pain — use Gallup survey to look at pain across countries
    • Strong connection of GDP per capita and pain (~ -0.42 correlation)
  • Questions:
    • Why SES-Pain gradient and Age-Pain gradient? Many possible explanations
    • Source/duration of pain
    • Biomarkers

Knott and Scott

  • Examples of kids with cerebral palsy and some other bad thing: expectations matter (despite having serious disabilities kids evaluated their life as good as others)
  • Support only: no effect
    • Homestart: no effect or negative!
    • Surestart: also been shown not to work
  • Skills and support: slightly better
  • Child Antisocial behaviour: benefits
  • Quality of mental health professionals: matters a lot
  • Very little long-term follow-up data
    • Perry pre-school: good effects at age 27
    • 10 years follow-up of Scott et al (2001) finds some long-term effects
  • More evidence based psychiatry
    • Quite a lot we can do if we do it in a skillful way

Well-being and Aging by Felicia Huppert

  • Negative stereotyping has large impact
    • Older people exposed to -ve stereotypes do worse on stress, cognitive performance etc
  • Causes of well-being
    • Separate +ve and -ve in GHQ (found a big difference in impact of e.g. unemployment on +ve vs -ve affect)
    • Magnitudes (as opposed to pure significance)
    • What are important drivers
  • Environmental effects likely to be large (much larger than genetic effects)
  • Study in US IT company: RCT of mindfulness meditation found substantial impact
  • How much is society losing from people not flourishing [ed: losing seems to mean losing money/GDP here]

Work, Stress and Well-being by Richard Freeman

  • Questions
    1. Does working environment affect worker well-being
    2. Can we specify workplace policies/practices that make work lives better
    3. Do measures of job satisfaction and well-being provide different information
    4. Moving beyond survey measures
  • Job satisfaction – one of the most widely studied variables
    • Correlated with health and turnover (people leaving associated with dissatisfaction)
    • Two-factor model needed to explain some patterns
      • Puzzle: unionized workers quit less but also less satisfied (expectations?)
    • Job satisfaction and well-being
    • When people quit and go to a new job their satisfaction goes up
  • Results from various datasets they used (WERS – several people per workplace + a lot of detail)
    1. Working environment matters a lot (could be workplace policy, culture, or selectivity)
      • Workplaces bad (good) in one dimension often bad (good) in others
      • [ed: so not some simple trade-off/optimization]
      • Large changes in well-being after quitting and moving elsewhere (bigger than money impact)
    2. Policy/practices matter but causality unclear
      • Major endogeneity problems (if I have a job satisfaction policy, is that because people are miserable?)
      • Well-being related to job attributes (hazardous, stress etc) in the normal way
    3. Job satisfaction and worker well-being
      • Not that correlated
      • Job satisfaction important for well-being but less important than health and various other variables
  • 3 things to do
    1. Biomarkers at high/low satisfaction workplaces
    2. Impact of change of jobs
    3. Harvard network on work, family and health
    • Check company work policy carefully
    • Look at health outcomes, stress, sleep
    • Found managers’ attitudes and practices strongly correlated with cardiovascular outcomes

Co-operation and well-being by Armin Falk and David Skuse

David Skuse (developmental neuropsychiatrist)

  • Individual differences in happiness
  • Role of genes and brain on behaviour
  • Mechanisms of mental functioning underlying mental health
  • Compensation for deficiencies …

Armin Falk

  • [ed: computer battery ran out so this is very partial]
  • Relative pay and fMRI results. Big impact of relative pay (Science 2007)
  • Unfairness in principal agent setup (dull task and unfair division of revenue. impact on heart rate variability)
  • Oxytocin study: look at genetic variations affecting oxytocin and see how they impact on trust in trust game (amount sent at stage 1)
  • Mentioned current/future research on cultural formation on preferences

European Policy for Intellectual Property (EPIP) Conference 2008

Last Friday and Saturday I was at the 2008 European Policy for Intellectual Property (EPIP) conference, held this year in Bern. I presented my paper on the optimal term of copyright and discussed a paper of Luca Spinesi’s on ‘Imperfect IPR enforcement, inequality, and growth’. Below can be found ‘impressionistic’ notes from some of the other sessions I had a chance to attend.

Jim Bessen: How can and how should economics inform patent policy?

  • What is aim of ‘Property Rights’
  • Look at example of tradable permits for pollution
    1. Do institutions do their jobs
    2. Resources (is air cleaner)
    3. Social welfare
  • For patent system, thanks to recent work, first two are within our reach (though not within our grasp)
  • Institutions. Want:
    1. Specificity
    2. Searchability
    3. Predictability
    4. Transactability
    5. Enforceability
  • Patent system is not doing so well
    1. Specificity: reasonable but lots of debate about what claims mean (40% overturn rate on appeal of district court decisions re. claim construction)
    2. Search: pretty poor (esp. in ICT). Many firms do not bother to search.
    3. Predictability: low (e.g. no defense insurance)
    4. Transact: can be anti-commons
    5. Enforce: pretty unpredictable
  • Resources (Innovation)
    • Patent system is not doing so well due to overlapping claims (pooling problem)
    • Fuzzy boundaries: dispute costs
      • Value patents (upper bound from renewal, re-assignment, int’l filings, firm market value, surveys, case-studies)
      • Dispute costs (lower bound)
      • For pharma: value ~ $12 billion/year, costs ~ $1 billion
      • Other industries: value ~ $2 billion/year (from 80s to present), costs ~ $1 billion/year up until mid 90s, since when they have spiked and are now much higher than value — e.g. in late 90s costs 3x value
      • Could use fees to address this (raise from ~$5000 to ~$30000)

Reto Hilty: Enforcement of Intellectual Property Rights (IPRs)

  • Huge figures circulate about losses from piracy
    • Most figures are (very) dubious and produced by the industry
  • History of IPRED (and IPRED2)
  • More intl stuff:
    • TRIPS+
    • FTAs (US)
    • EPAs (EU)
    • ACTA
  • Why has this focus on enforcement happened
    • General mantra that strengthening IP rights is good for innovation
    • Patents: probably have over-protection
      • Full patent protection (EPC 1973) — i.e. patent covers subsequent uses even if not anticipated. (probably a mistake)
      • Biological substances — full patent protection particularly problematic
      • Software patents …
      • Drugs and developing countries
    • Copyright law
      • Internet users see constriction not justice
      • Entertainment + TPMs — “unjustified profits”
      • Scientific research: unnecessary constrictions (Open Access)
    • Industrial design
    • Trade-mark law — large extensions in the late 80s (protection of colours, shapes unjustified)
    • Eventually this constant extension generated such opposition that it is now at a standstill
    • Thus, rightsholders move focus to enforcement (focus on ‘efficiency’)
  • But stronger enforcement also causes problems [ed: the strength of a right in fact is the product of enforcement and strength in theory]
    • will there be a backlash?
  • Also extension of IP geographically — esp. to developing countries
  • What justifications are there for IP enforcement
    • IPR not valuable without some enforcement, certainty …
  • One size cannot fit all: whether for IP itself or for enforcement
    • If IPR is misused enforcement can make things worse
  • Suggestions:
    • Decriminalize where too much IP protection
    • Strengthen enforcement where IP truly detrimental
    • Distinguish IP protection from consumer protection (counterfeiting not the same as IP protection)
    • [ed: one concern here is that it seems we are using enforcement/non-enforcement to correct IP rights which are themselves wrong -- enforce where good, don't enforce where not good. But if that were agreed, why couldn't we correct the underlying problem?]

Davis, Davis and Hoisl: Leisure time invention

  • PatVal data (10.5k German patents sampled with survey of inventors)
  • Leisure time has +ve impact on inventive output
  • Leisure time invention +vely linked to interactions with co-workers and outsiders
  • More leisure time invention in conceptual-based technologies rather than science-based technologies
  • Incidence of leisure time invention will be -vely related to project size
  • Most hypotheses confirmed

Ashish Arora: Patents and Innovation

  • Evidence for benefits of patents on innovation is mixed
    • Example of early Swiss and German dye and chemical industries
    • Surveys are the main evidence; these show there are rents from patents, but the equivalent subsidy ratio is not that high
  • Kyle and McGahan: no inducement of research in diseases of poor countries after TRIPs
    • Even if patent protection is important no reason for developing countries to have them (already have protection in developed countries)
  • Thickets, patent litigation and trolls
    • Cockburn MacGarvie and Mueller (2008): fragmentation increasing across all industries
    • Substantial litigation costs
    • Geradin, … find no thicket problem in 3G telephony
  • Anti-commons
    • Completely unpersuaded by the evidence
    • All examples came from universities: US research universities have made a mess of tech-transfer and patenting, alienating faculty and angering corporate partners (Bayh-Dole has had significant unintended bad consequences)
  • Markets for technology (specialization)
    • The first order effect of patents may be on trade in technology
    • Having people whose business it is to sell technology is really important (particularly if you are a developing country)
    • Licensing flows in US: $66 billion in 2006 (Carol Robbins). Good proportion of domestic R&D
    • Hall and Ziedonis evidence on specialist semiconductor firms
    • Gambardella and Giarratana (2007): software security patents
  • Making patents more useful
    • Much of the problem is bad patents due to:
      1. Invention is poorly understood (underlying knowledge base is poor)
      2. The claims are written with the intent of claiming as much as possible while revealing as little as possible
    • ‘Metes and bounds’ of the patent are unclear to all except a handful of patent lawyers
    • Not new: cf. German chemical industry back in 19th century
    • Solution:
      1. Force patents to be written using (i) standard terms (ii) without legal jargon (whose only justification is a futile reach for precision)
      2. Patents should be (i) published expeditiously (ii) transactions (licenses, assignments, beneficial interests) in patents should be recorded and disclosed

Survey on Patent Licensing: Dominique Guellec (OECD)

  • Why licensing out:
    • Value from unused inventions
    • Inventions with applications elsewhere
    • Fabless firms
    • Establishing technology as a standard (may raise Competition issues)
    • Cross-licensing deals (ditto)
  • Expected Economics Effects (+ve)
    • Increases diffusion
    • Reduces duplication
    • Boost downstream competition
    • Facilitates specialization
  • Can also be -ve (mirror image of +ve ones e.g. reduced duplication = less competition)
  • Graph showing huge increase in royalty/license payments since mid 80s: ~$10B/year to ~$110B/year (source: World Bank)
    • But how much of this is real (i.e. not tax manipulation etc) — and it also includes copyright etc
  • OECD survey implemented by the EPO and by the JPO/University of Japan on licensing behaviour
    • focuses on licensing out
    • response rate: 42% in europe, 34% in japan [ed: japan responses are less reliable for reasons not entirely clear to me]
    • no questions on revenues (people don’t respond when you ask this — either don’t know or don’t want to tell)
  • Results:
    • 35% of european companies license out, 59% of japanese firms
    • Licensing to non-affiliated companies: 20% of Eur, 27% of Japanese
    • U-shaped prob of licensing as a function of size
    • By tech field: highest in chemistry and electronics
    • Younger companies do it more (controlling for size) [ed: issues here though. Old firms which are small are not the same as young firms that are small]
    • Why do it?
      • Earning revenue: 60% EUR, 52% JPN; cross-licensing: 18%, 18%
    • Patents you would have licensed but could not/did not: ~20%
      • Why? Difficulty of finding a partner (25% of EUR and 18% of JPN)
      • Not important: problems of drafting contracts or technology not mature
  • Difficulty of finding partners could be for several reasons but suggests could be role for more/better intermediaries to facilitate transactions (INPIT in Japan)

Patent Thickets and the Market for Ideas: Mark Schankerman (LSE)

  • Market for ideas (patent licensing and sale of patents) [ed: this is obviously not the whole market for ideas ...]
  • Study market though new lens: settlement of patent infringement disputes
    • Do not know whether, when settlements happen, licensing actually occurs
  • Focus on 2 key aspects:
    • Fragmentation of rights (‘patent thickets’)
    • Certainty of enforcement (CAFC led to more certainty — not worrying here about pro-patent bias)
  • Fragmentation:
    • Trad story: bad (higher transaction costs, bargaining failure …)
    • Dissenting voice (Lichtman 2006): greater fragmentation lowers the value at stake in each negotiation and this reduces the incentive to bargain hard. This speeds up settlement. Of course still leaves question of whether this reduces total negotiation time.
  • Model gives us various hypotheses:
    • H1: more complementarity means longer negotiation
    • H2: more fragmentation means shorter negotiations
    • H3: Settlement negotiations will be shorter for patents litigated after CAFC (1982)
    • H4: Impact of fragmentation of external rights will be lower after the introduction of the CAFC
    • H5: CAFC has a bigger impact where the preceding circuit had more uncertainty
  • Results
    • More fragmentation: leads to lower dispute duration (19.6 months for < 50th percentile frag vs. ~16 months for > 90th percentile)
    • CAFC has a big effect on dispute duration (~33 months to ~18months)
  • Conclusion: looking at delay (not royalty stacking or other issues)
    • Certainty: good
    • Fragmentation: not bad (and maybe good)

CCRP Summer Workshop 2008

City University’s Centre for Competition and Regulatory Policy summer workshop took place today and yesterday and I was there to present The Control of Porting in Platform Markets. As well as presenting I had the chance to take some ‘impressionistic’ notes on some of talks which are included below.

Thursday

Session 1: Telecoms and Postal Services

PAUL SMITH – CEPA: Defining the universal postal service

CARLO REGGIANI – UNIVERSITY OF YORK: Network neutrality and non-discriminatory issues: An economic analysis

  • 2 recent papers (2007): Economides and Tåg + Hermalin and Katz
  • 2 sided-model
  • n-firms providing platform (telecoms)
  • network externalities both sides
  • Questions:
    • do telcos set prices on both side
    • What is form of the competition
    • net neutrality is always bad so why used

Session 2: Competition issues

RUFUS POLLOCK – CAMBRIDGE UNIVERSITY: The control of porting in two-sided markets

DAVID GILL – UNIVERSITY OF SOUTHAMPTON (with John Thanassoulis): The impact of bargaining on markets with price takers: Too many bargainers spoil the broth

  • What happens if some consumers bargain a discount from list prices
  • Some proportion of consumers z do not bargain
    • exogenous but endogenized later on
    • Cournot competition for these guys (with Bertrand this all goes wrong …)
  • Of those that do bargain some get one quote some get multiple (Bertrand from multiple)
    • trade-off getting monopoly from single quote guys vs. purchase from multi-quoters
    • From Burdett + Judd 1983
    • [ed: is there a cost for getting quotes]
    • [ed: Very like Baye and Morgan and resulting in similar mixing results]
  • Firms anticipate that higher list prices raises profits from bargainers
  • So as number of bargainers go up firms raise list prices
  • Results
    • As bargainers prop. increase price-takers do worse
      • Waterbed effect + fact that
    • Existing bargainers CS decreases as prop. bargainers rises
    • Consumers who swap (from price-taking to bargaining) benefit
    • Overall effect: ambiguous
      • Overall negative if most bargainers get only one quote
      • Overall positive if most bargainers get multiple quotes
  • Then endogenize number of bargainers by assuming some intermediate types who face cost c of bargaining

    • Results similar
    • Still do not endogenize choice of number of quotes -- discussed in paper but not done
  • Comments:

    • Waterbed effect: what if firm entry (i.e. zero profit condition) then better prices for bargainers => worse prices for price-takers
    • Baye, Morgan

Session 3: Electricity and Related Issues

GERT BRUNEKREEFT – JACOBS UNIVERSITY BREMEN: Ownership unbundling of the German electricity TSOs – A social cost benefit analysis

VINCENT RIOUS – SUPELEC (with Jean-Michel Glachant, Yannick Perez and Philippe Dessante): The diversity of design of TSOs

STEPHEN WOODHOUSE - POYRY: Wind generation – no limits?

  • Everyone is signing up to incredibly optimistic renewable and CO2 targets.
  • For UK wind is essential as we have a lot of it compared to any other renewable options
  • However wind has major delivery issues and conventional wisdom is that its max penetration is 10%
  • Problem is:

    • wind can be irregular
    • (more significant) demand shows pronounced fluctuations over the day while renewables don't (on average). This means that the backup capacity needed to deal with peak load makes renewables on avg. v. expensive.
  • [ed]: Comments

    • why this debate about whether feasible or not — why can’t we simply price carbon efficiently
    • like a man who has a dislocated shoulder and spends all his time trying to fix the pain this causes in his hip rather than sorting out his shoulder

Session 4: Evaluation of Regulation and Competition Institutions

GORDON HUGHES – UNIVERSITY OF EDINBURGH: Efficiency frontiers, stranded assets and the X-factor for telecoms network operators

  • Setting the X in RPI – X
  • Stochastic frontier analysis
  • Look at 68 US local exchange carriers (data from FCC)
    • Current costs from historic accounts
    • Stranded assets (from switch to digital)
  • Structural break in 2000
    • To 1999 costs falling at -3.3%. From 2000 falling at -2.1%
  • Stranded assets affect costs: cumulative impact of 5% annual decline in switched line equivalent to a cost increase of ~2.6% per year
  • Slow convergence towards frontier: ~1.3% per year
  • Accounting vs. economic cost important
    • Accounting cost: ~ -1.7% per year (post 2000)
    • Economic cost: ~ 1.7% per year (post 2000)
  • RPI-X (see the note after this list):
    • using accounting costs: X ~ 1.5-2.5%
    • using economic costs: X ~ -0.5 to -2.0% (i.e. -ve and prices rise faster than inflation)
    • In Europe can justify adding ~2.5% to X but this will fall over time.
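
The RPI – X note referenced in the list above: standard price-cap arithmetic, with illustrative numbers that are mine, not the talk’s.

    % Under an RPI - X cap, allowed nominal prices grow at inflation minus X:
    \[
      \frac{p_t}{p_{t-1}} = 1 + \text{RPI} - X .
    \]
    % E.g. RPI = 3% and X = 2% allows prices to rise by at most 1% (a 2% real
    % cut each year). A negative X, as with the economic-cost estimates above,
    % means the cap permits prices to rise faster than inflation.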

JOHN CUBBIN – CITY UNIVERSITY (with Jon Stern, Federica Maiorano and William Gboney): What can we learn from economic studies of infrastructure regulatory policies?

Friday

Session 1: Transport

ANNE YVRANDE-BILLON – UNIVERSITY PARIS SORBONNE (with Miguel Amaral and Stephane Saussier): Does competition for the field improve cost efficiency? Evidence from the London bus tendering model

  • Competition for market
  • Idea is that competition improves bids (lower charges for providing the service, or higher payments for the right to run it)
  • Little empirical testing
  • Several confounding factors
    • Winner’s curse: can happen in common-value and in private value auctions if bidders systematically under-estimate their own costs (i.e. over-estimate their own values)
    • Renegotiation effect: a bid may not be allowed if not good enough, even if it wins (implies more aggressive bidding)
    • Entry effect: Larger number of expected bidders might discourage entry.
  • Existing papers on impact of no. of bidders on outcome
    • Branman et al (1987), Thiel (1988), Dalen + Gomez-Lobo (2001), Hong + Shum (2002) — find strong winner’s curse, Nunez + Athias (2006)
    • Do not control for other extra factors
  • France vs. London (Amaral, Saussier + Yvrande 2008)
    • French Urban Public Transport sector
    • declining productivity, huge deficit — basically a disaster
    • tendering model (for buses):
      • No clear selection criterion (intuitu personae) — right enshrined in law by vague definition of the ‘collective welfare’
      • No regulator
      • Few bidders (av 1.4)
      • 66% of auctions with only one bidder
      • Incumbent advantage (~88% renewed)
      • Collusion (fined by Comp. Commission 2005)
    • Bus auctions in France are for complete networks while for UK they are for routes
    • This excludes Paris as Paris directly administered
  • UK model
    • Bus operation auctioned on route-by-route basis
    • Bids are annual price for service provision
      • Revenues accrue to the authority — so service provider has no demand risk (just ‘industrial’ risk)
    • Selection criterion ‘best economic value’ but:
      • Qualitative factors count (e.g. reputation, quality)
      • Discretionary power of the regulator (TfL) — may not select the lowest bidder if a) do not think firm can deliver b) would result in more than 20% market share c) …
      • A public benchmark exists (what was the old public operator)
    • Auction format: combinatorial first price auction. Aims to:
      • Encourage participation of small operators by unbundling the network
      • Benefit from coordination and scale and scope via package format
  • Regarding initial concerns:
    • These are private value auctions so less risk of winner’s curse
      • Cantillon + Pesendorfer (2006): private information about opportunity costs
      • [ed: not sure here. would seem likely that there is a strong common component here]
    • No renegotiation of contracts: short term contracts and strong regulator
  • Dataset: all auctions between March 2003 and May 2006 (294 individual routes)
    • all on the regulator’s website!
    • other information about the transport network
  • Summary info:
    • Constant over time (unlike France)
    • Around avg 3 bidders per auction
    • Only 20% of auctions have one bidder
  • Basic regression:
    • Av cost per mile (cpm) does decline with number of bidders
    • But clear endogeneity problem as av. bus miles correlated with number of bidders and costs
    • Deal with this by using the predicted number of bidders, based on the number of operators in the vicinity of the route in the previous period (see the sketch after this list).
    • However, correlating actual and predicted number of bidders finds a -ve correlation (suggests people enter (and bid high) when the number of expected bidders is low and vice-versa)
      • confirms endogeneity of entry
  • Results:
    • N affects bids in the way we would expect
    • Competition effect larger than (deterred-)entry effect
  • Discussant comments:
    • Still carry some demand risk because demand may impact on cost of operation
    • Data on congestion would be useful
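
The ‘predicted number of bidders’ step flagged above is essentially a two-stage least squares (instrumental variables) move. A minimal sketch of that logic, with made-up data and variable names, is below; this is not the authors’ code.

    import numpy as np

    # Minimal two-stage least squares sketch of the "predicted number of bidders"
    # idea referenced above. All data and names are hypothetical; in the paper the
    # worry is that the number of bidders is endogenous to route costs, so it is
    # replaced by a prediction based on an instrument (operators nearby last period).

    rng = np.random.default_rng(0)
    n = 294                                  # roughly the number of routes in the study

    operators_nearby = rng.poisson(4, n)     # instrument (hypothetical)
    route_length = rng.uniform(5, 25, n)     # exogenous control (hypothetical)
    bidders = 1 + rng.binomial(operators_nearby, 0.6)
    cost_per_mile = 10 - 0.4 * bidders + 0.1 * route_length + rng.normal(0, 1, n)

    def ols(y, X):
        """Return OLS coefficients, with an intercept column added to X."""
        X = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    # Stage 1: predict the number of bidders from the instrument plus controls.
    b1 = ols(bidders, np.column_stack([operators_nearby, route_length]))
    bidders_hat = b1[0] + b1[1] * operators_nearby + b1[2] * route_length

    # Stage 2: regress cost per mile on the *predicted* number of bidders.
    b2 = ols(cost_per_mile, np.column_stack([bidders_hat, route_length]))
    print("effect of one extra (predicted) bidder on cost per mile:", round(b2[1], 2))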

ALBERTO GAGGERO- UNIVERSITY OF ESSEX (with Claudio Piga): Pricing and competition on the UK- Irish aviation market

  • Background
    • UK-Irish aviation market is largely dominated by Aer Lingus (EI) and Ryanair (FR)
    • Ryanair launched takeover in 2006 but was blocked in 2007
  • Test whether there is impact of competition
    • Plus it is a study with European data (most existing studies use US data)
  • Data
    • 84k flights
    • EI: 30%, FR: 55%, next biggest 10%
    • ~ 25 routes
    • Using web spider have full fares dataset
    • CAA: available seats, sold seats, flight frequency (aggregated)
    • Distance in km between 2 endpoints
  • Put in most variables you could think of
  • Endogeneity issues:
    • pricing and market structure may be simultaneously determined so do IV
    • IV approaches mostly based on the fact that the more an airline serves both endpoints of a route, the more likely it is to serve that route
  • Results:
    • (Surprisingly) market share variables go the wrong way (higher market share, lower prices)
    • This holds with IVs or without
    • Different IVs do affect the size of the (negative) coefficient but do not change the sign
    • (Route) market share up 1% reduces fares by 0.19% (Borenstein IVs) or 0.5% (their own IVs)
    • Check robustness (e.g. pooling all London airports)
    • All other regressors economically and statistically significant and of right sign

Session 2: Finance

ENRIQUE BENITO – FSA: Size, growth and bank dynamics

  • Background
    • Banking in Europe has changed a lot (lots of deregulation)
    • Size in banking is important
    • Little examination of size of banks
    • General increase in concentration (more big banks)
    • Data on Spanish banks 1970-2006
  • Traditional literature:
    • X-section regression to explain current sizes as function of underlying factors (and hence trends over time in size and concentration driven by these underlying factors)
  • Here focus on classic Gibratian stochastic growth process (LPE)
    • S(i,t) = S(i,t-1)^beta * exp(mu(i,t)) (see the simulation sketch after this list)
    • mu(i,t) ~ N(alpha(i) + delta(t), sigma^2)
  • Predictions from LPE
    • P1: beta = 1
    • P2: No persistence in growth across periods (no correlation across periods)
    • P3: Variability of growth rates is independent of size
  • If these hold (strong: all 3, weak: just P1) then (log) bank sizes follow a random walk with drift
  • Data
    • Annual data for all banks
    • Reliable data maybe from 1980 so do everything both 1970-2006 and 1980-2006
    • Include firms that exit plus mergers [ed: not quite sure how they deal with mergers exactly]
  • Results:
    • Beta less than 1 (significantly, but not by much). Some (IMO) weak evidence that it has increased a little bit in more recent periods
    • Rho (measure of convergence) is significantly above 1 (which implies previous period’s growth predicts growth today)
    • Heteroscedasticity: yes (size matters)
    • Variability of growth: larger banks have more stable growth
    • So reject LPE over whole period but may be converging towards it over time
  • Conclusion:
    • Size-growth relationships change over time
    • Converging towards LPE => more skewed size distribution in future (more concentration)
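
The simulation sketch referenced in the list above: a minimal illustration (my own, not the paper’s code) of the LPE process, showing that with beta = 1 log bank size follows a random walk with drift and the size distribution becomes increasingly skewed.

    import numpy as np

    # Minimal sketch of the Gibrat / Law of Proportionate Effect (LPE) process
    # discussed above (illustrative parameters, not the paper's code):
    #   log S(i,t) = beta * log S(i,t-1) + mu(i,t),   mu(i,t) ~ N(alpha, sigma^2)
    # With beta = 1 this is a random walk with drift in logs, so cross-sectional
    # dispersion keeps growing and sizes in levels become ever more skewed.

    rng = np.random.default_rng(0)
    n_banks, n_years = 1000, 37               # e.g. 1970-2006
    beta, alpha, sigma = 1.0, 0.02, 0.10      # assumed values for illustration

    log_size = np.zeros((n_years, n_banks))   # all banks start at size 1
    for t in range(1, n_years):
        log_size[t] = beta * log_size[t - 1] + rng.normal(alpha, sigma, n_banks)

    print("std of log size after 5 years: ", round(float(log_size[5].std()), 3))
    print("std of log size after 36 years:", round(float(log_size[-1].std()), 3))

    sizes = np.exp(log_size[-1])              # skewness in levels: mean >> median
    print("mean/median size in final year:", round(float(sizes.mean() / np.median(sizes)), 2))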

KAI KOHLBERGER – FSA (with Richard Johnson): Has MCOB regulation affected the suitability of subprime mortgage sales?

  • Did introduction of new regulations (MCOB) affect mis-selling
    • Mortgages should be suitable (explicit defn)
  • Approach
    • Look at arrears rate 12 months after sale
  • Data
    • 15 firms, 590k mortgages
    • Regressions with 300k observations (due to missing values — check this is not systematic)
    • FSA Product sales database (PSD)
    • Macro vars
    • Subprime defined as in PSD
  • Find no impact on arrears rates discernible from policy change

FLOSS 2008 Workshop on Free/Open Source Software

Last week I attended FLOSS 2008, the second international workshop/network meeting on FLOSS (Free/Libre/Open Source software) in Rennes, France. I was presenting my paper Innovation and Imitation with and without Intellectual Property Rights (and would have offered discussant comments but the author of the paper I was scheduled to discuss had to pull out at the last minute). In addition to this I got to hear a variety of interesting talks. On some of these I was able to take notes which I have included below for the ‘delectation’ of anyone else who is interested.

Mikko Valimaki: IPR and Open Source Software

  • Goodman and Myers (2005) — the 3G standard.
  • Leveque and Meniere 2007: what does RAND mean
    • reasonable royalty is R = c (v1-v2)p where c is incremental costs of licensing, v1-v2 is gain from using this patent over second-best.
  • Other questions for royalty-setting
    • quality or volume of patents
    • early or late innovators
    • cumulative royalties or one-time fees
  • But all models he knows of have non-zero royalty fees
    • [ed]: not surprising given that you will always get interior solutions
  • Windows/Samba discussion
    • specific sets of terms
    • provide RF for the open source community
  • Commission Decision para 783
    • “On balance, the possible negative impact of an order to supply on Microsoft’s incentives to innovate is outweighed by its positive impact on the level of innovation of the whole industry.”
  • Nokia to acquire Symbian:
    • “a full platform will be available … under a royalty-free license … from the Foundation’s first day of operations … the Foundation will make selected components available as open source at launch.”
    • [ed]: Motivation here is clear: Nokia care about the hardware and for them software is a complementary good — which they therefore wish to be as cheap as possible. But this raises the question of what is being made open: is it hardware patents or pure software patents (and if so how big a deal is this)?

Stefan Koch: Efficiency of FLOSS Production

  • Question of efficiency of open source development
  • How much software did we get for our effort
    • Is OS a waste of resources?
  • Discussion without much empirical basis
    • Claim: fast and cheap, high quality, finding bugs late is inefficient (actually large effort) — see IEEE Software 1999
  • Completely unknown as no-one keeps time-sheets. So
    • Effort based on participation data
    • Effort based on product — look at software and ask how much effort would be needed in commercial environment
  • Empirical research in open source
    • Mainly case studies
    • Helpful but need proper large-scale analysis
  • Mined software repositories [ed: cf. today FLOSSMetrics, FLOSSmole]
    • 8,261 projects
    • 7,734,082 commits
    • 663M LOCs
    • resources and output is skewed: top decile of programmers: 79% of code base, second decile: 11%
  • Effort estimation based on actual participation
    • active programmer months (define active as committing in a given month)
    • high correlation with LOC added in month
  • Cumulate this number for each project
    • But not equal to a commercial person-month
    • How do we scale: use 18.4 h/w taken from stats for committers on Linux kernel
    • [ed:] this is the key assumption. The whole point is that FLOSS effort is not observed and they are using a measure of output (committing) and trying to infer actual activity
  • Manpower function modelling:
    • Norden-Rayleigh model (1960) (see the sketch after this list)
    • Some set of problems N (unknown but finite)
    • Probs are solved independently and randomly (following Poisson)
    • This fits ok but has eventual decline in participation which does not occur
    • Modify this: in particular to allow introduction of new problems
      • Introduce in prop to original no. problems, in prop to current set of problems etc
      • Also have different learning rates
      • [ed: but isn't the setup a little different. Really it is a question of success vs. non-success in terms of acquiring users + some kind of bound on amount of participation due either to fission or complexity]
  • Product-based estimation
    • COCOMO 81 and COCOMO 2
  • Results:
    • Comparison COCOMO – Norden-Rayleigh
    • For COCOMO 81 cannot find parameters favourable enough to explain Norden-Rayleigh curve
    • For COCOMO 2 can find parameters, but only very favourable ones
    • Suggests (roughly) that FLOSS is very efficient (but this is not very rigorous)
  • More formal estimation using all models etc
    • Norden-Rayleigh significantly below product-based estimates (factor of 8 in mean)
  • Interpretation
    • FLOSS v. efficient (self-selection for tasks etc)
    • Extremely high amount of non-programmer participation (1:7 relation …)
  • [ed]: not sure about this generous view. Other explanations
    • No quality measurement (also mentioned by Koch)
      • OK: lot of code but low quality
    • (Related) Many sourceforge projects are incomplete, easy bit at the start
      • Later comes a lot of refactoring/writing documentation. This may display significant diminishing returns
    • Many FLOSS projects come from what were originally commercial projects. In that case:
      • code may have already been written
      • conceptual components have been done already
    • Trade-off of time vs. productivity
      • May be more productive to only work 10h a week but then product might not be ready for 10 years
  • Form discussion
    • interesting point: Nokia thinking of moving to more FLOSS in-house because they can’t manage their 5-10k programmers centrally any more
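
The sketch referenced in the Norden-Rayleigh item above: the standard form of the manpower curve with made-up parameters (my own illustration, not Koch’s code).

    import numpy as np

    # Norden-Rayleigh manpower model (standard form, illustrative parameters):
    #   cumulative effort   E(t) = K * (1 - exp(-a * t^2))
    #   staffing per period m(t) = dE/dt = 2 * K * a * t * exp(-a * t^2)
    # K is the total effort needed for the (finite) set of problems. Staffing
    # rises, peaks and then declines -- the eventual decline that, per the talk,
    # does not match observed FLOSS participation, motivating modified models
    # in which new problems keep being introduced.

    K, a = 500.0, 0.01              # assumed: 500 person-months total effort
    t = np.arange(0, 61)            # months since project start

    staffing = 2 * K * a * t * np.exp(-a * t ** 2)
    cumulative = K * (1 - np.exp(-a * t ** 2))

    print("staffing peaks in month:", int(t[np.argmax(staffing)]))
    print("effort spent by month 24: %.0f of %.0f person-months" % (cumulative[24], K))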

Mickael Vicente: Shift to Competences Model: A Social Network Analysis of Open Source Professional Developers

  • Robles 2007
    • Statistics on Debian showing increasing corporate involvement
  • Social network extraction
    • Get repo logs
    • Create link between 2 developers if they have committed on the same file (non-directed graph)
      • Simplification: the best collaboration of each developer (directed graph) — pick other developer with whom they have committed most files in common
    • Longitudinal analysis
      • extract clusters
  • Correlation with professional career
    • CV collected on Internet, personal web page etc (96% collected)
  • Interesting data
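
A minimal sketch (my own, assuming networkx and a simple commit log; not the author’s code) of the extraction described above: an undirected edge joins two developers who have committed to the same file, and the “best collaborator” simplification keeps, for each developer, a single directed edge to the developer with whom they share the most files.

    from collections import defaultdict
    from itertools import combinations
    import networkx as nx

    def co_commit_graphs(commits):
        """commits: iterable of (author, file_path) pairs from the repository log."""
        authors_by_file = defaultdict(set)
        for author, path in commits:
            authors_by_file[path].add(author)

        shared = defaultdict(int)               # (a, b) -> number of shared files
        for authors in authors_by_file.values():
            for a, b in combinations(sorted(authors), 2):
                shared[(a, b)] += 1

        undirected = nx.Graph()
        undirected.add_weighted_edges_from((a, b, w) for (a, b), w in shared.items())

        directed = nx.DiGraph()                 # each developer -> their best collaborator
        for a in undirected.nodes:
            best = max(undirected[a], key=lambda b: undirected[a][b]["weight"], default=None)
            if best is not None:
                directed.add_edge(a, best)
        return undirected, directed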

Nicholas Radtke: What Makes FLOSS Projects Successful: An Agent-Based Model of FLOSS Projects

  • Positive Characteristics of FLOSS
    • High quality (Low defect count: Chelf 2006)
    • Rapid development
    • Violates Brooks law (Rossi 2004)
    • Risky Business
  • for every successful FLOSS project there are dozens of unsuccessful projects
  • Corporate IT manager survey (2002)
    • 41% mention inability to hold someone responsible for software
  • Attempts at Simulating FLOSS
    • SimCode (Dalle and David 2004)
    • OSsim (Wagstrom et al. 2005)
    • K-Means stuff
  • Simulate across landscape
    • Not social network
    • Focus on developer decision to join/contribute to projects (Agent-Based Modelling)
  • Defining Success and Failure
    • Traditional metrics do not work well (on budget?)
    • Completion (Crowston et al. 2003)
    • Progression through maturity stages (Crowston and Scozzi 2002)
    • Number of developers
    • Mailing list activity
    • Project outdegree, Active developer count (Wang 2007)
  • The Model Universe
    • Agents and projects
    • Agents:
      • Consumption: 0-1
      • Producer: 0-1
      • Resource: 0-1.5 (1=40h)
      • Memory: agents only aware of some subset of projects
      • Needs vector (preferences)
      • utility: linear sum of: similarity match + current popularity (current resources) + cumulative resources + download + f(maturity)
    • Projects:
      • resources needed
      • current resources
      • cumulative resources
      • download count
      • preferences: same as agents, but converge towards those of the agents working on the project
  • Agents choose between projects each time period
    • choice has some randomness via a multinomial logit: prob(choose project i) = exp(mu * U_i) / sum_j exp(mu * U_j), i.e. proportional to exp(mu * utility of project i) (see the sketch after this list)
  • Results
    • Simulate over 250 time steps ~ 4 years
    • calibrate [ed: in a way I was not quite clear about]
    • compare simulation with empirical data from sourceforge
      • developers per project
      • projects per developer
    • Find that (from simulation data) downloads and cumulative resources are not important
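
A minimal sketch of the project-choice step described in the model above: each agent scores the projects it remembers with a linear utility and then picks one with multinomial-logit probabilities P(i) = exp(mu*U_i) / sum_j exp(mu*U_j). The field names and equal utility weights are my own illustrative assumptions, not the authors’ calibration.

    import numpy as np

    def choose_project(agent_needs, projects, mu=1.0, rng=None):
        """projects: list of dicts with hypothetical keys matching the attributes above."""
        if rng is None:
            rng = np.random.default_rng()
        utilities = []
        for p in projects:
            u = (np.dot(agent_needs, p["preferences"])    # similarity match
                 + p["current_resources"]                 # current popularity
                 + p["cumulative_resources"]
                 + p["downloads"]
                 + p["maturity_score"])                   # stands in for f(maturity)
            utilities.append(mu * u)
        utilities = np.array(utilities)
        probs = np.exp(utilities - utilities.max())       # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(projects), p=probs)         # index of the chosen project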

Fabio Manenti: Dual Licensing in Open Source Software Markets

  • Benefits of Going Open Source
    • feedback from community
    • network effects (usage)
    • competitive pressures (e.g. Netscape) [ed: not sure this is a benefit]
  • Dual-licensing
    • Kosky (2007): 6% of a representative sample of European OSS business firms employ dual-licensing strategies

Alexia Gaudeul: Blogs and the Economics of Reciprocal (In-)Attention

  • What blogs are
  • Reasons for blogging
  • Question: do you befriend (link) because of content produced or do you produce content because of friends
  • General points
    • Market interactions only part of wider class of reciprocal relations
    • Time vs. money economics
    • Unique dataset, very detailed and complete, to test networked relations
  • Model — but left out due to time
  • Dataset: livejournal 2006
    • Sociology: teenagers to young adults (15 to 23), female (67%), Americans (70%)
    • Fast growth: created in 1999, 8M accounts, 1.3M active
    • FLOSS but for-profit (SaaS)
    • Great part from self-referential
    • Lively: 4 comments per post on average
    • Federated by communities: no. of communities per person 15
    • Journals updated for more than 2 years on avg
    • 70% have posted in last 2 months
    • No. of entries: 1 every 2 days
    • No. of friends: 50 avg
    • Balance between friends and friends of
    • Balance between comments received / made
  • Friendship patterns
    • There may be balance, but this does not explain the no. of friends of different individuals
    • Need to distinguish
      • Norm of reciprocity: more promiscuous bloggers accumulate friends
      • Content attractiveness
        1. Quality/freq. of posts
        2. Interactivity (comments per post)
  • Regressions
    • Reciprocity: No. blogs read (friend) = b * number of readers (friend of) + error
    • Activity: No. readers = cX + error — X = matrix of ind. variables
    • Endogeneity issues [ed: all over the place]
    • Regress: ln(Friends) = b * ln(Friend of) + ... (instrumenting Friends Of with Activity to solve the endogeneity issues)
      • Saturation around 400 friends seemingly (few with more)
    • Max no. of friendships when your no. of friends = no. of ‘friends of’ (maybe)
      • A norm of reciprocity
    • Issues with endogeneity of activity (which was used to instrument friends of); a sketch of the instrumented regression follows this list
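
A minimal sketch of the instrumented regression described above, using the IV2SLS estimator from the linearmodels package (the column names, the log(1+x) transform and the choice of instruments are my own assumptions; the talk did not specify the software used):

    import numpy as np
    import pandas as pd
    from linearmodels.iv import IV2SLS

    def reciprocity_iv(df: pd.DataFrame):
        """df needs (hypothetical) columns: friends, friend_of, posts_per_day,
        comments_per_post, account_age."""
        df = df.assign(
            ln_friends=np.log1p(df["friends"]),        # log(1 + x) to keep zero counts
            ln_friend_of=np.log1p(df["friend_of"]),
        )
        exog = df[["account_age"]].assign(const=1.0)   # controls plus a constant
        # ln(Friends) = b * ln(FriendOf) + controls, with ln(FriendOf) endogenous
        # and instrumented by activity measures
        return IV2SLS(
            dependent=df["ln_friends"],
            exog=exog,
            endog=df["ln_friend_of"],
            instruments=df[["posts_per_day", "comments_per_post"]],
        ).fit()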

Sylvain Dejean

  • Does ICT / the Internet lead to a global village or to cyber-balkanization?
  • What leads to the emergence of virtual communities?
  • Is the heterogeneity of contributions an impediment to self-organization?
  • How to manage virtual communities?
  • Agent-based model:
    • Individuals defined by some characteristics
    • Herfindahl index measures degree of self-organization [ed: why self-organization?] (see the sketch after this list)
    • Communities change via selection and variation
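
A small sketch of the Herfindahl index used as the concentration measure here: the sum of squared shares of (say) contributions across communities, ranging from 1/N when activity is evenly spread over N communities to 1 when everything is concentrated in one. Using contribution counts as the shares is my illustrative assumption.

    def herfindahl(contributions):
        """contributions: list of contribution counts, one per community."""
        total = sum(contributions)
        if total == 0:
            return 0.0
        return sum((c / total) ** 2 for c in contributions)

    # e.g. herfindahl([50, 30, 20]) == 0.38 and herfindahl([100, 0, 0]) == 1.0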

2008 International Industrial Organization Conference (IIOC)

After attending the IIOC conference last year I was back this weekend at the 2008 IIOC event which took place at Marymount University in Virginia. I presented the latest version of two of my papers: The Control of Porting in Two-Sided Markets and Forever Minus a Day? Theory and Empirics of Optimal Copyright Term.

I also provided discussant comments on Christopher Ellis’s and Wesley Wilson’s paper entitled Cartels, Price-Fixing, and Corporate Leniency Policy: What Doesn’t Kill Us Makes Us Stronger. In addition I include below some very partial notes on some of the sessions I attended — though note-taking was rather limited by the fact that, although there were more papers overall than last year (388 in total), sessions were organized with more breadth and less length.

Transaction Costs and Trolls: the Behaviour of Individual Inventors, Small Firms and Entrepreneurs in Patent Litigation (Gwendolyn Ball and Jay Kesan)

  • Explore settlements in relation to patents. Questions:
    • How often do settlements happen relative to litigation
    • Are small firms and entrepreneurs at a major disadvantage in defending their patents
    • Or do patent ‘trolls’ use the threat of litigation to ‘extort’ payments
      • NTP vs. RIM ($612M)
      • Saffron vs. Boston Scientific ($412M to individual doctor who had an infringed heart stent patent)
    • Does the nature of the defendant/plaintiff (large/medium/small) affect the likelihood of settlement
  • Existing databases not so great
    • Only list trial outcomes not pre-trial outcomes
    • Often only list primary plaintiffs
    • Fix this and link patent litigation to companies
  • Results
    • It is usually claimed that 95% of cases settle
      • In fact 8% are resolved at pre-trial (still expensive)
      • 4% are resolved at trial
      • so ~ 88% settle
    • Troll stuff:
      • 97 licensing firms as plaintiffs (none as defendants). These may be classic trolls but they are a small part of overall litigation.
      • Evidence shows that entrepreneurs and small inventors are very active (so do not seem particularly disadvantaged) and often sue each other rather than larger firms
      • Crudely: small inventors more likely to pursue a case to the end than large litigators
  • Discussant comments:
    • Bessen and Meurer find $28M hit on firms facing litigation
    • Issues of correlated errors across cases
  • My comments:
    • probably need to disaggregate across areas — after all no-one has suggested ‘trolling’ is an issue in traditional pharma
    • (for me) it would be useful to have an idea how many cases ‘settle’ at the ‘letter stage’, that is, before anything even turns up in the court system. After all you only get to the courts (even with preliminaries) if you cannot sort out a license.

Prior Art – To Search or Not to Search (Vidya Atal)

  • Alcacer + Gittelman 2006 showed 40% had prior art added by USPTO examiner
  • 2/3 of the citations on an average patent are added by the USPTO
  • Langinier + Marcoul (2003), Lampe (2007) — incentive to disclose prior art
  • Issue of bad (non-novel) patents may be because people have poor incentives to search
  • Mainly relates this to the fact that even a bad patent (if it gets past examination) has a +ve payoff

What’s Wrong with Modern Macroeconomics

This January I met Alan Kirman at the Robinson Workshop on Rationality and Emotions. Over lunch we had a brief discussion about the difficulties of modern macroeconomics. I was therefore intrigued to see a new paper of his (co-authored with Peter Howitt, David Colander, Axel Leijonhufvud and Perry Mehrling) entitled Beyond DSGE Models: Towards an Empirically-Based Macroeconomics which was presented in January at the AEA conference (and looks like it will be appearing in the AER ‘Papers and Proceedings’).

The paper has much to say about the current state of macro, in particular the serious problems with DSGE (dynamic stochastic general equilibrium models) and where we should go from here. As the abstract puts it:

This paper argues that macro models should be as simple as possible, but not more so. Existing models are “more so” by far. It is time for the science of macro to step beyond representative agent, DSGE models and focus more on alternative heterogeneous agent macro models that take agent interaction, complexity, coordination problems and endogenous learning seriously. It further argues that as analytic work on these scientific models continues, policy-relevant models should be more empirically based; policy researchers should not approach the data with theoretical blinders on; instead, they should follow an engineering approach to policy analysis and let the data guide their choice of the relevant theory to apply.

It is worth quoting at some length from the paper in order to bring out the full ramifications of the story the authors tell:

Keynesianism Goes Wrong

With the development of macro econometric models in the 1950s, many of the Keynesian models were presented as having formal underpinnings of microeconomic theory and thus as providing a formal model of the macro economy. Specifically, IS/LM type models were too often presented as being “scientific” in this sense, rather than as the ad hoc engineering models that they were. Selective micro foundations were integrated into sectors of the models which give them the illusory appearance of being based on the axiomatic approach of General Equilibrium theory. This led to the economics of Keynes becoming separated from Keynesian economics.

The Reaction and a New Dawn (Rational Expectations and Neoclassical GE Models)

The exaggerated claims for the macro models of the 1960s led to a justifiable reaction by macroeconomists wanting to “do the science of macro right”, which meant bringing it up to the standards of rigor imposed by the General Equilibrium tradition. Thus, in the 1970s the formal modeling of macro in this spirit began, including work on the micro foundations of macroeconomics, construction of an explicit New Classical macroeconomic model, and the rational expectations approach. All of this work rightfully challenged the rigor of the previous work. The aim was to build a general equilibrium model of the macro economy based on explicit and fully formulated micro foundations.

But ‘Technical’ Difficulties Intervene

Given the difficulties inherent in such an approach, researchers started with a simple analytically tractable macro model which they hoped would be a stepping stone toward a more sensible macro model grounded in microfoundations. The problem is that the simple model was not susceptible to generalization, so the profession languished on the first step; and rational expectations representative agent models mysteriously became the only allowable modeling method. Moreover, such models were directly applied to policy even though they had little or no relevance. … [emphasis added]

But There Was a Reason For This: Other Stuff is Hard

The reason researchers clung to the rational expectations representative agent models for so long is not that they did not recognize their problems, but because of the analytical difficulties involved in moving beyond these models. Dropping the standard assumptions about agent rationality would complicate the already complicated models and abandoning the ad hoc representative agent assumption would leave them face to face with the difficulties raised by Sonnenschein, Mantel and Debreu. While the standard DSGE representative models may look daunting, it is the mathematical sophistication of the analysis and not the models themselves which are difficult. Conceptually, their technical difficulty pales in comparison to models with more realistic specifications: heterogeneous agents, statistical dynamics, multiple equilibria (or no equilibria), and endogenous learning. Yet, it is precisely such models that are needed if we are to start to capture the relevant intricacies of the macro economy.

Building more realistic models along these lines involves enormous work with little immediate payoff; one must either move beyond the extremely restrictive class of economic models to far more complicated analytic macro models, or one must replace the analytic modeling approach with virtual modeling. Happily, both changes are occurring; researchers are beginning to move on to models that attempt to deal with heterogeneous interacting agents, potential emergent macro properties, and behaviorally more varied and more realistic opportunistic agents. The papers in this session describe some of these new approaches. [emphasis added]

Some Closing Comments of My Own

So there you go: plenty of tough challenges and a big dose of humility. To some extent it seems things here run on 30-40 year cycles: Keynesianism from 1945-1975, Rational Expectations DSGE from 1975-2005, and now we’re into the era of complexity and ‘loose’ tools with an emphasis on empirics and heuristics rather than formal models. Whether this new approach will deliver more than the old is yet to be seen. After all, one reason that there are so many physicists getting interested in Economics and Finance is that the going is so hard in, e.g., condensed matter physics (superconductivity anyone …). If the economy really is so complex, will we ever do any better at the macro scale than we do for the weather, and if so, will it not rely on some conceptual breakthrough rather than just using more hard-core dynamical systems theory and running more agent-based simulations?

That said, as the authors argue, the ‘simple’ route isn’t working and the hardness of the path is no reason not to attempt it — an argument in many ways the direct inverse of the traditional ‘drunkard-and-the-lamp’ approach in which we restrict our models, often beyond the point at which they remain relevant, in order to maintain analytical tractability. Thus, though cautious regarding what more ‘complexity-oriented’ methods can deliver, I am in wholehearted agreement with the authors that they justify much greater exploration.