Category Archives: Policy

Creative Commons and the Commons

Background: I first got involved with Creative Commons (CC) in 2004 soon after its UK chapter started. Along with Damian Tambini, the then UK ‘project lead’ for CC, and the few other members of ‘CC UK’, I spent time working to promote CC and its licenses in the UK (and elsewhere). By mid-2007 I was no longer very actively involved and to most intents and purposes was no longer associated with the organization. I explain this to give some background to what follows.

Creative Commons as a brand has been fantastically successful and is now very widely recognized. While in many ways this success has been beneficial for those interested in free/open material it has also raised some issues that are worth highlighting.

Creative Commons is not a Commons

Ironically, despite its name, Creative Commons, or more precisely its license suite, does not produce a commons. The CC licenses are not mutually compatible: for example, material under a CC Attribution-ShareAlike (by-sa) license cannot be intermixed with material licensed under any of the CC NonCommercial licenses (e.g. Attribution-NonCommercial or Attribution-NonCommercial-ShareAlike).

Given that a) the majority of CC licenses in use are ‘non-commercial’ and b) ShareAlike is also heavily used (e.g. by Wikipedia), this issue affects a large proportion of ‘Creative Commons’ material.

Unfortunately, the presence of the word ‘Commons’ in CC’s name and the prominence of ‘remix’ in the advocacy around CC tend to make people think, falsely, that all CC licenses are in some way similar or substitutable.

The ‘Brand’ versus the Licenses

More and more frequently I hear people say (or more significantly write) things like: “This material is CC-licensed”. But as just discussed there is large, and very significant, variation in the terms of the different CC licenses. It appears that for many people the overall ‘Brand’ dominates the actual specifics of the licenses.

This is in marked contrast to the Free/Open Source software community, where even in the case of the Free Software Foundation’s licenses people tend to specify the exact license they are talking about.

Standards and interoperability are what really matter for licenses (cf the “Commons” terminology). Licensing and rights discussions are pretty dull for most people — and should be. They are important only because they determine what you and I can and can’t do, and specifically what material you and I can ‘intermix’ — possible only where licenses are ‘interoperable’.

To put it the other way round: licenses are interoperable if you can intermix freely material licensed under one of those licenses with material licensed under another. This interoperability is crucial and it is, in license terms, what underlies a true commons.
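To make the interoperability point concrete, here is a minimal, illustrative sketch in Python (the compatibility table is a deliberately simplified approximation of how the main CC licenses combine, not legal advice, and can_intermix is a hypothetical helper written for this post, not an existing API):

# Simplified, illustrative compatibility table: which pairs of CC licenses
# can be intermixed in a single derivative work, and what license the
# result must then carry. Pairs not listed are treated as incompatible.
COMPATIBLE = {
    ("BY", "BY"): "BY",
    ("BY", "BY-NC"): "BY-NC",
    ("BY", "BY-NC-SA"): "BY-NC-SA",
    ("BY", "BY-SA"): "BY-SA",
    ("BY-NC", "BY-NC"): "BY-NC",
    ("BY-NC", "BY-NC-SA"): "BY-NC-SA",
    ("BY-NC-SA", "BY-NC-SA"): "BY-NC-SA",
    ("BY-SA", "BY-SA"): "BY-SA",
}

def can_intermix(a, b):
    """Return the license a mixed work could carry, or None if the inputs clash."""
    return COMPATIBLE.get(tuple(sorted((a, b))))

print(can_intermix("BY-SA", "BY-NC"))  # None: ShareAlike and NonCommercial clash
print(can_intermix("BY", "BY-SA"))     # BY-SA: attribution-only material mixes freely

The point is simply that the ‘CC’ label on its own tells you nothing about which of these combinations are possible; only the specific licenses do.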

More broadly, we are interested in a ‘license standard’: in knowing not only that a set of licenses are interoperable but also that they all allow certain things, for example that anyone may use, reuse and redistribute the licensed material (or, to put it in terms of freedom, that they guarantee those freedoms to users). This very need for a standard is why we created the Open Definition for content and data, building directly on a similar standard (the Open Source Definition) from the Free/Open Source software community.

The existence of non-commercial

CC took a crucial decision in including NonCommercial licenses in its suite. Given the ‘Brand’ success of Creative Commons, the effect has been to give NC licenses a status close to, if not identical with, the truly open, commons-supporting licenses in the CC suite.

There is a noticeable difference here with the software world, where non-commercial licensing also exists, but under the ‘freeware’ and ‘shareware’ labels (these terms aren’t always used consistently), and with that material clearly distinguished from Free/Open Source software.

As the CC brand has grown, some individuals and institutions want to use CC licenses simply because they are CC licenses (a desire also encouraged by the baking of CC licenses into many products and services). Faced with choosing a license, many people, and certainly many institutions, tend to go for the most restrictive option available (especially when the word ‘commercial’ is in there — who wants to sanction the exploitation of their work for gain by some third party!). Thus, it is no surprise that non-commercial licenses appear to be by far the most popular.

Without the NC option, some of these people would have chosen one of the open CC licenses instead. Of course, some would not have licensed at all (or at least not with a CC license), sticking with pure copyright or some other set of terms. Nevertheless, the benefit of gaining a clear dividing line, and of creating brand pressure for a real commons and real openness, would have been substantial and worth, in my opinion, the loss of the non-commercial option.

Structure and community

It is notable that in the F/OSS community most licenses, especially the most popular, are either not ‘owned’ by anyone (MIT/BSD) or are run by an organization with a strong community base (e.g. the Free Software Foundation). Creative Commons seems rather different. While there are public mailing lists, decisions regarding the licenses, and about crucial features thereof such as compatibility with third-party licenses, ultimately remain with CC central, based in San Francisco.

Originally, a fair amount of autonomy was given to country projects but over time this autonomy has gradually been reduced (there are good reasons for this — such as a need for greater standardization across licenses). This has concrete effects on the terms of the licenses.

For example, for v3.0 the Netherlands project was asked to remove provisions which included things like database rights in its share-alike clause and instead standardize on a waiver of these additional rights (rights which are pretty important if you are doing data(base) licensing). Most crucially, the CC licenses reserve to Creative Commons as an organization the right to make compatibility determinations. This is arguably the single most important aspect of licensing, at least with respect to interoperability and the Commons.

Creative Commons and Data

Update: as of September 2011, there has been further discussion between Open Data Commons and Creative Commons on these matters, especially regarding interoperability and Creative Commons v4.0.

From my first involvement in the ‘free/open’ area, I’d been interested in data licensing, both because of personal projects and requests from other people.

When first asked how to deal with this I’d recommended ‘modding’ a specific CC license (e.g. Attribution-Sharealike) to include provisions for data and data(bases). However, starting from 2006 there was a strong push from John Wilbanks, then at Science Commons but with the apparent backing of CC generally, against this practice as part of a general argument for ‘PD-only’ for data(bases) (with the associated implication that the existing CC licenses were content-only). While I respect John, I didn’t really agree with his arguments about PD-only and furthermore it was clear that there was a need in the community for open but non-PD licenses for data(bases).

In late 2007 I spoke with Jordan Hatcher and learned about the work he and Charlotte Waelde were doing for Talis to draft a new ‘open’ license for data(bases). I was delighted and started helping Jordan with these licenses — licenses that became the Open Data Commons PDDL and the ODbL. We sought input from CC during the drafting of these licenses, specifically the ODbL, but the primary response we had (from John Wilbanks and colleagues) was just “don’t do this”.

Once the ODbL was finalized we then contacted CC further about potential compatibility issues.

The initial response was that, as CC did not recommend use of its licenses (other than CCZero) for data(bases), there should not be an issue: as with CC licenses and software, there would be an ‘orthogonality’ of activity — CC licenses would license content, F/OSS licenses would license code, and data(base) licenses (such as the ODC ones) would license data. We pressed the point and had a phone call with Diane Peters and John Wilbanks in January 2010, with a follow-up email detailing the issues a little later.

We’ve also explained on several occasions to senior members of CC central our desire to hear from CC on this issue and our willingness to look at ways to make any necessary amendments to ODC licenses (though obviously such changes would be conditional on full scrutiny by the Advisory Council and consultation with the community).

No response has been forthcoming. To this date, over a year later, we have yet to receive any response from CC, despite having been promised one at least three times (we’ve basically given up asking).

Further to this lack of response, and without any notice to or discussion with ODC, CC recently put out a blog post in which they stated, in marked contrast to previous statements, that CC licenses were entirely suited to data. In many ways this is a welcome step (cf. my original efforts to use CC licenses for data above), but CC made no statement about a) how they would seek to address data properly or b) the relationship of these efforts to existing work at Open Data Commons, especially the ODbL. One can only assume, at least in the latter case, that the omission was intentional.

All of this has led me, at least, to wonder what exactly CC’s aims are here. In particular, is CC genuinely concerned with interoperability (beyond a simple ‘everyone uses CC’) and the broader interests of the community who use and apply their licenses?

Conclusion

Creating a true commons for content and data is incredibly important (it’s one of the main things I work on day to day). Creative Commons have done amazing work in this area but as I outline above there is an important distinction between the (open) commons and CC licenses.

Many organisations, institutions, governments and individuals are currently making important decisions about licensing and legal tools – in relation to opening up everything from scientific information, to library catalogues to government data. CC could play an important role in the creation of an interoperable commons of open material. The open CC licenses (CC0, CC-BY and CC-BY-SA) are an important part of the legal toolbox which enables this commons.

I hope that CC will be willing to engage constructively with others in the ‘open’ community to promote licenses and standards which enable a true commons, particularly in relation to data where interoperability is especially crucial.

Papers on the Size and Value of EU Public Domain

I’ve just posted two new papers on the size and ‘value’ of the EU Public Domain. These papers are based on the research done as part of the Public Domain in Europe (EUPD) Research Project (which has now been submitted).

  • Summary Slides Covering Size and Value of the Public Domain – Talk at COMMUNIA in Feb 2010
  • The Size of the EU Public Domain

    This paper reports results from a large recent study of the public domain in the European Union. Based on a combination of catalogue and survey data, our figures for the number of items (and works) in the public domain extend across a variety of media and provide one of the first quantitative estimates of the ‘size’ of the public domain in any jurisdiction. We find that for books and recordings the public domain is around 10-20% of published extant output, amounting to millions of items (books) and hundreds of thousands of items (recordings) respectively. For films the figure is dramatically lower (almost zero). We also establish some interesting figures relevant to the orphan works debate, such as the proportion of catalogue entries without any identified author (approximately 10%).

  • The Value of the EU Public Domain

    This paper reports results from a large recent study of the public domain in the European Union. Based on a combination of catalogue, commercial and survey data, we present detailed figures both on the prices (and price differences) of in-copyright and public domain material and on the usage of that material. Combined with the estimates for the size of the EU public domain presented in the companion paper, our results allow us to provide the first quantitative estimate of the ‘value’ of the public domain (i.e. welfare gains from its existence) in any jurisdiction. We also find clear, and statistically significant, differences between the prices of in-copyright and public-domain material in the two areas for which we have significant data: books and sound recordings in the UK. Patterns of usage indicate a significant demand for public domain material, but limitations of the data make it difficult to draw conclusions about the impact of entry into the public domain on demand.

The results on price differences are particularly striking since, to my knowledge, this is by far the largest analysis done to date. More significantly, the results clearly show that the claim in the Commission’s impact assessment that copyright had no price effect (compared to the public domain) was wrong. That claim was central to the impact assessment and to the proposal to extend copyright term in sound recordings (and it was based on a single study with a very small sample, performed by PwC as part of a music-industry sponsored piece of consultancy for submission to the Gowers review).

Public Sector Transparency Board

As announced on Friday on the UK Government’s data.gov.uk, I am one of the members of the UK Government’s newly formed Public Sector Transparency Board.

From the announcement:

The Public Sector Transparency Board, which was established by the Prime Minister, met yesterday for the first time.

The Board will drive forward the Government’s transparency agenda, making it a core part of all government business and ensuring that all Whitehall departments meet the new tight deadlines set for releasing key public datasets. In addition, it is responsible for setting open data standards across the whole public sector, listening to what the public wants and then driving through the opening up of the most needed data sets.

Chaired by Francis Maude, the Minister for the Cabinet Office, the other members of the Transparency Board are Sir Tim Berners-Lee, inventor of the World Wide Web, Professor Nigel Shadbolt from Southampton University, an expert on open data, Tom Steinberg, founder of mySociety, and Dr Rufus Pollock from Cambridge University, an economist who helped found the Open Knowledge Foundation.

In the words of Francis Maude:

“In just a few weeks this Government has published a whole range of data sets that have never been available to the public before. But we don’t want this to be about a few releases, we want transparency to become an absolutely core part of every bit of government business. That is why we have asked some of the country’s and the world’s greatest experts in this field to help us take this work forward quickly here in central government and across the whole of the public sector.”

UK Government Plans to Open Up Data

Yesterday, in a speech on “Building Britain’s Digital Future”, UK Prime Minister Gordon Brown announced wide-ranging plans to open up UK government data. In addition to a general promise to extend the existing commitments to “make public data public” the PM announced:

  • The opening up of a large and important set of transport data (the NaPTAN dataset)
  • A commitment to open up a significant amount of Ordnance Survey data from the 1st April (though details of which datasets were not yet specified)
  • By the Autumn, an online e-“domesday” book giving “an inventory of all non-personal datasets held by departments and arms-length bodies”
  • A new “institute” for web science headed by Tim Berners-Lee and Nigel Shadbolt and with an initial £30m in funding

This speech is a significant indication of a further commitment to the “making public data public” policy announced in the Autumn.

It’s great to see this as, a year ago, it seemed as if government policy was set largely to ignore the research in the Models of Public Sector Information Provision by Trading Funds report (authored by myself, David Newbery and Professor Bently back in 2008), whose basic conclusion was that government data which is digital, bulk and ‘upstream’ should be made available at marginal cost.

More detailed excerpts (with emphasis added)

Opening up data

In January we launched data.gov.uk, a single, easy-to-use website to access public data. And even in the short space of time since then, the interest this initiative has attracted – globally – has been very striking. The site already has more than three thousand data sets available – and more are being added all the time. And in the past month the Office for National Statistics has opened up access for web developers to over two billion data items right down to local neighbourhood level.

The Department for Transport and the transport industry are today making available the core reference datasets that contain the precise names and co-ordinates of all 350 thousand bus stops, railway stations and airports in Britain.

Public transport timetables and real-time running information is currently owned by the operating companies. But we will work to free it up – and from today we will make it a condition of future franchises that this data will be made freely available.

And following the strong support in our recent consultation, I can confirm that from 1st April, we will be making a substantial package of information held by ordnance survey freely available to the public, without restrictions on re-use. Further details on the package and government’s response to the consultation will be published by the end of March.

e-Domesday Book

And I can also tell you today that in the autumn the Government will publish online an inventory of all non-personal datasets held by departments and arms-length bodies – a “domesday book” for the 21st century.

The programme will be managed by the National Archives and it will be overseen by a new open data board which will report on the first edition of the new domesday book by April next year. The Government will then produce its detailed proposals including how this work can be extended to the wider public sector.

To inform the continuing development of making public data public, the National Archives will produce a consultation paper on a definition of the “public task” for public data, to be published later this year.

The new domesday book will for the first time allow the public to access in one place information on each set of data including its size, source, format, content, timeliness, cost and quality. And there will be an expectation that departments will release each of these datasets, or account publicly for why they are not doing so.

Any business or individual will be free to embed this public data in their own websites, and to use it in creative ways within their own applications.

Mygov

So our goal is to replace this first generation of e-government with a much more interactive second generation form of digital engagement which we are calling Mygov.

Companies that use technology to interact with their users are positioning themselves for the future, and government must do likewise. Mygov marks the end of the one-size-fits-all, man-from-the-ministry-knows-best approach to public services.

Mygov will constitute a radical new model for how public services will be delivered and for how citizens engage with government – making interaction with government as easy as internet banking or online shopping. This open, personalised platform will allow us to deliver universal services that are also tailored to the needs of each individual; to move from top-down, monolithic websites broadcasting public service information in the hope that the people who need help will find it – to government on demand.

And rather than civil servants being the sole authors and editors, we will unleash data and content to the community to turn into applications that meet genuine needs. This does not require large-scale government IT Infrastructure; the ‘open source’ technology that will make it happen is freely available. All that is required is the will and willingness of the centre to give up control.

Policy Recommendations in the Area of Innovation, Creativity and IP

I was recently asked to put together a short document outlining my main policy recommendations in the area of “innovation, creativity and IP”. Below is what I prepared.

General IP Policy

Recommendation: IP policy, and more generally innovation policy, should aim at the improvement of the overall welfare of UK society and citizens and not just at promoting innovation and creativity

Innovation is, of course, a major factor in the improvement of societal welfare — but it is not the only factor: access to the fruits of that innovation is also important.

IP rights are monopolies and such monopolies when over-extended do harm rather than good. The provision of IP rights must balance the promotion of innovation and creativity with the need for adequate access to the results of those efforts both by consumers and those who would seek to innovate and create by building upon them. A policy which aims purely at maximizing innovation, via the use of IP rights, will almost certainly be detrimental to societal welfare, since it will ignore the negative consequences of extending IP on access to innovation and knowledge. As such, IP policy is about having “enough, but not too much”.

This basic point is often overlooked. To help minimize the risk of this occurring in future it is suggested that this basic purpose — of promoting the welfare of UK citizens — be explicitly embedded within the goals of organisations and departments tasked with handling policies related to innovation and IP.

Recommendation: Move away from a focus on intellectual property to look at innovation and information policy more widely

IP rights are but one tool for promoting innovation and often a rather limited one. The focus should be on the general problem — promoting societal welfare through innovation and access to innovation — not on one particular solution to that problem.

Provision and Pricing of Public Sector Information

Background

Public sector information (PSI) is information held by a public sector organisation, for example a government department or, more generally, any entity which is majority owned and/or controlled by government. Classic examples of public sector information in most countries include, among many others, geospatial data, meteorological information and official statistics.

While much of the data or information used in our society is supplied from outside the public sector, compared to other parts of the economy, the public sector plays an unusually prominent role. In many key areas, a public sector organization may be the only, or one among very few, sources of the particular information it provides (e.g. for geospatial and meteorological information). As such, the policies adopted regarding maintenance, access and re-use of PSI can have a very significant impact on the economy and society more widely.

Funding for public sector information can come from three basic sources: government, ‘updaters’ (those who update or register information) and ‘users’ (those who want to access and use it). Policy-makers control the funding model by setting charges to external groups (‘updaters’ or ‘users’) and committing to make up any shortfall (or receive any surplus) that results. Much of the debate focuses on whether ‘users’ should pay charges sufficient to cover most costs (average cost pricing) or whether they should be given marginal cost access — which equates to free when the information is digital. However, this should not lead us to neglect the third source of funding via charges for ‘updates’.

Policy-makers must also concern themselves with the regulatory structure in which public sector information holders operate. The need to provide government funding can raise major commitment questions, while the fact that many public sector information holders are the sole source of the information they supply raises serious competition and efficiency issues.

Recommendation: Make digital, non-personal, upstream PSI available at marginal cost (zero)

The case for pricing public sector information to users at marginal cost (equal to zero for digital data) is very strong for a number of complementary reasons. First, the distortionary costs of average rather than marginal cost pricing are likely to be high. Second, the case for hard budget constraints to ensure efficient provision and induce innovative product development is weak. As such, digital upstream public sector information is best funded out of a combination of ‘updater’ fees and direct government contributions with users permitted free and open access. Appropriately managed and regulated, this model offers major societal benefits from increased provision and access to information-based services while imposing a very limited funding burden upon government.

Recommendation: Regulation should be transparent, independent and empowered. For every public sector information holder there should be a single, clear, source of regulatory authority and responsibility, and this ‘regulator’ should be largely independent of government.

This is essential if any pricing-policy is to work well and is especially important for marginal-cost pricing where the Government may be providing direct funding to the information holder. Policy-makers around the world have had substantial experience in recent years with designing these kinds of regulatory systems and this is, therefore, not an issue that should be especially difficult to address.

Copyright Term

Background

The optimal term of copyright has been a very live policy issue over the last decade. Recently, in the European Union, and especially in the UK, there has been much debate over whether to extend the term of copyright in sound recordings from its current 50 years.

The basic trade-off inherent in copyright is a simple one. On the one hand, increasing copyright yields benefits by stimulating the creation of new works but, on the other hand, it reduces access to existing works (the welfare ‘deadweight’ loss). Choosing the optimal term, that is the length of protection, presents these two countervailing forces particularly starkly. By extending the term of protection, the owners of copyrights receive revenue for a little longer. Anticipating this, creators of work which were nearly, but not quite, profitable under the existing term will now produce work, and this work will generate welfare for society both now and in the future. At the same time, the increase in term applies to all works including existing ones — those created under the term of copyright before extension. Extending term on these works prolongs the copyright monopoly and therefore reduces welfare by hindering access to, and reuse of, these works.

Recommendation: Reduce Copyright Term – And Certainly Do Not Extend It

Current copyright term is significantly over-extended. Calculations performed in the course of my own work indicate that optimal copyright term is likely around 15 years and almost certainly below 40 (the breadth of these estimates is a direct reflection of existing data limitations, but even the upper bound is still (far) below existing terms).

Even a simple present-value calculation would indicate that the incentives for creativity today offered by extra term 50 years or more in the future are negligible — while the effect on access to knowledge can be very substantial, especially when term extensions are applied retrospectively (as they almost always are).
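As a rough, back-of-the-envelope illustration (the flat annual revenue and the 10% discount rate here are assumptions chosen purely for the example, not figures from the work cited above):

# Illustrative only: present value of revenue earned in years 51-70
# (a 20-year extension) relative to years 1-50 (the existing term),
# assuming flat annual revenue and a 10% nominal discount rate.
r = 0.10

def pv(years):
    return sum(1 / (1 + r) ** t for t in years)

existing = pv(range(1, 51))    # years 1-50
extension = pv(range(51, 71))  # years 51-70
print(extension / existing)    # roughly 0.007, i.e. well under 1%

Even before allowing for the fact that sales of most works decline sharply over time, the extra present-value revenue from those distant years is under one per cent of the total.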

It is also noteworthy that recent extensions, such as that for authorial copyright in the US (the CTEA) and the proposed extension of recording copyright in the EU, have been opposed well-nigh unanimously by academic economists and other IP scholars. Policy-making in this area should be evidence-based and designed to promote the broader welfare of society as a whole. Policies that appear to reflect nothing more than special-interest lobbying will only perpetuate the “marked lack of public legitimacy” which the Gowers report lamented, discouraging those who wish to contribute constructively to future Government policy-making in these areas, and making enforcement ever harder — effective enforcement, after all, depends on consent borne of respect as well as obedience coerced through punishment.

Prospect Magazine Article: Mashing the State

The lead article of Prospect Magazine’s February issue is a piece by James Crabtree and Tom Chatfield entitled “Mashing the State”. It’s an in-depth look at the recent launch of data.gov.uk and its place in the wider context of government policy in relation to information — as well as information’s relation to governance (that “mashing” of the state …).

Where Does My Money Go gets a mention as does the “Cambridge” paper on pricing models at trading funds.

Argentina Extends Copyright Term in Recordings

Apparently, on the 11th of December 2009, Argentina extended copyright term in recordings from 50 to 70 years (see e.g. here, here and here).

Instead of the real reasons for extension — propping up the profits of a handful of multinational record labels and their shareholders (at the expense of everyone else) — the usual disingenuous justifications were once again being trotted out by music industry representatives.

First up was (all quotes from the billboard article):

The investment argument

“I would like to thank all those who supported this new law which will benefit the music community in Argentina,” tango master Leopoldo Federico, president of AADI, said in a statement. “It will improve incentives to invest in future recordings and also helps older performers who had faced losing their rights just when they need them the most.”

John Kennedy, chairman and chief executive of IFPI, also welcomed the legislation. “I am delighted that Argentina has strengthened the rights of performers and producers by extending the term of protection,” he said in a statement. “Argentina has a strong musical heritage and this reform means that producers will have a greater incentive to invest in the next generation of local talent.”

But wait a moment: “producers” are already getting 50 years of monopoly protection. How much extra incentive are those 20 extra years going to provide?

Let’s do some simple calculations.

First off remember this is about incentives, which means it is about expected payoffs at the point of investment, i.e. when the recording is created. As such we should be dealing with “present value” figures, i.e. total revenue in “today’s terms”.

To work out the effect of an extension we need an idea of a) what future sales look like relative to today’s (the cultural decay rate) and b) a way of putting future revenue in today’s terms (the discount rate). The industry’s own analysis (commissioned for the Gowers review in the UK) used a nominal discount rate of 12.3% (pre-tax) and cultural decay rates of 3-20% (in nominal terms, it appears). Let’s be generous and take the lowest possible cultural decay rate of 3%. Combined with the 12.3% discount rate, this means that, on average, present-value revenue is dropping at a substantial 15.3% a year!

Running this through a bit of basic maths (and I mean really basic — the code is inline below) we find that the 20-year extension delivers a tiny 0.08% increase in revenues. Even halving the nominal discount rate to a very low figure like 6% only pushes the revenue gain up to just over 1% (1.1%). For those who like things visually, here’s a picture:

[Figure: revenue_impact — present value of revenue under the existing 50-year term compared with the extra revenue from a 20-year extension]

Aside: Of course there will be a lot of variation from the average — note that the relevant variation is not between hits and duds (as these may experience exactly the same decay!) but between records which go on selling at a reasonably steady rate and those which fade away fairly quickly. However, an “investor”, such as a record label, tends to “invest” in a whole “portfolio” of records precisely in order to reduce this “risky” variability (and in any case greater risk implies a higher discount rate assuming the investor is risk averse). As such the average revenue increase is precisely what an “investor” will use when making decisions such as how many recordings to fund.

Next up was:

The pension for performers argument

“I would like to thank all those who supported this new law which will benefit the music community in Argentina,” tango master Leopoldo Federico, president of AADI, said in a statement. “It will improve incentives to invest in future recordings and also helps older performers who had faced losing their rights just when they need them the most.”

But life expectancy in Argentina is 75 years — and is probably shorter for most performers who are old today. So, unless a performer is especially prolific in their teens, 50 years of copyright monopoly is already enough to cover them in their old(er) age.

And anyway, haven’t performers heard about pensions or saving for the future — everyone else has. I don’t expect the plumber I pay today to fix my sink to come back in 50 years asking for additional payment towards a pension plan! Instead I expect the plumber to save some of the income received today to use in retirement.

Moreover, as the calculations above should make clear, copyright income 50+ years in the future from recordings today is likely (on average) to be tiny (0.08% of the revenue received during the first 50 years!). As such there is no way the average performer could rely on income from a 20 year term extension 50 years in the future to support them in their old age. Just like everyone else they will need to save some of the income during that first 50 years.

Aside: in fact it is more like 10 or even 5 years since, for most recordings, the vast majority of the revenue they will ever generate comes in the first 5 or 10 years after release.

Last up we had:

The cultural argument

Javier Delupí, CAPIF’s executive director, added: “This new law is good news for Argentine culture. It promotes the creation of new music and safeguards the rights of performers and producers both here and abroad.”

But:

  • The investment argument is completely invalid (see above) and hence there won’t be any “promoting the creation of new music”.
  • In fact, to the contrary, the extension will impede the creation of new works by reducing the public domain on which all creators can and do build.
  • Moreover, an extension transfers money to (older and already successful) performers away from younger and less well-known ones.
  • Depending on how comparison of terms is implemented, an extension can actually harm the balance of payments of the enacting country (e.g. the UK loses out from a term extension in recordings).

So, no, term extensions aren’t good for (Argentine) culture — though they may be good for CAPIF (Representando a la Industria Argentina de la Música).

Conclusion

It’s time we start calling a spade a spade: this term extension is a simple, and highly inefficient, subsidy to the major record labels plus, perhaps, a few, already highly successful, performers, which is paid for by the general populace.

If it can command widespread assent in that form then, fine, let it pass! But I sincerely doubt that it could, and if it cannot, then the passage of such bills is nothing more or less than a straightforward “robbery upon the public” — in the 150-year-old words of Henry Warburton, radical opponent of the UK’s term extension of the 1840s.

Colophon

Here’s the Python script used for the revenue calculations above, together with the code to generate the figure.

#!/usr/bin/env python
def extra_revenue(term, extension, decay, irate):
    """Print the percentage increase in present-value revenue from extending
    the copyright term by `extension` years, given a cultural decay rate
    `decay` and a nominal discount rate `irate`."""
    # each year's revenue is worth this much relative to the previous year's
    dfactor = 1/(1+decay+irate)
    def geometric(df, NN):
        # sum of the geometric series df**0 + df**1 + ... + df**NN
        return (1-df**(NN+1))/(1-df)
    # present value of revenue over the existing term
    total = geometric(dfactor, term)
    # present value of the extra revenue earned during the extension period
    textension = dfactor**term * geometric(dfactor, extension)
    increase = textension/total
    print('Term, Extension, decay, irate: %s %s %s %s' % (term, extension,
        decay, irate))
    print('Percentage increase: %s' % (100*increase))

extra_revenue(50, 20, 0.03, 0.123)  # headline case above: ~0.08%
extra_revenue(50, 20, 0.05, 0.123)
extra_revenue(50, 20, 0.03, 0.06)   # halved discount rate: ~1.1%
extra_revenue(50, 20, 0.04, 0.06)

import math
def visualize():
    import matplotlib.pyplot as pyplot
    # normalize main square to 10x10 = 100
    pyplot.bar(0, 10, width=10, fc='red', alpha=0.6)
    edge = math.sqrt(0.08)
    pyplot.bar(14, edge, width=edge, bottom=5, align='center', fc='blue', alpha=0.6)

    pyplot.bar(14, 1, width=1, bottom=1, align='center', fc='blue', alpha=0.6)

    pyplot.figtext(0.15, 0.7, 'Present Value of Revenue\nUnder Existing\n50y Term', multialignment='center', va='top')
    pyplot.figtext(0.65, 0.7, 'PV of Extra Revenue\nfrom 20y Extension',
            multialignment='center', va='top')
    pyplot.figtext(0.7, 0.4, '1% of Existing\n Revenue',
            multialignment='center', va='top')

    # hack to get rid of axes ...
    ax = pyplot.gca()
    ax.set_frame_on(False)
    pyplot.yticks([],[])
    pyplot.xticks([],[])

    fig = pyplot.figure(1)
    fig.set_size_inches(5, 3)
    pyplot.savefig('revenue_impact.png')

visualize()
print('Saved image to disk')

Open Notebook Social Science

The other day I posted up some work-in-progress on the subject of patterns of knowledge production.

That material is still in a fairly preliminary state. However, releasing it in this form was a conscious decision and part of an ongoing attempt on my part to practice a more open “release early, release often” approach to research.

In doing this I’m drawing direct inspiration from the open source and open notebook (science) communities and seeking to engage in what might be termed open notebook social science!

I think most researchers (including myself) feel a reluctance to put out material that isn’t at a reasonable level of maturity. While there are some good reasons for this, I think the main motivations are less positive, and are primarily to do with fear: be it of criticism or that your ideas are “taken” by others. While such fears can have some basis, it seems to me the benefits of an open approach — in terms of visibility, dissemination, and potential for collaboration — significantly outweigh any of the associated risks.

Over the last year, I’ve already been making some effort to move in this direction but from this point on I’m aiming to do this more thoroughly and methodically. A first step in this will be to put all the “patterns” and data online.

Research Fellowship on Economics of PSI

There’s an interesting 6-month fellowship at OPSI for work on the economics of public sector information, funded by the ESRC and The National Archives. The deadline for applications is 6th August:

Valuing information: an economic analysis of public sector information and its re-use

Length of Fellowship: Six months

Proposed start date: Autumn 2009

Applications to be submitted as soon as possible (and by 6 August)

Location of Fellowship: The National Archives’ sites (Central London and Kew)

As part of its Placement Fellowship Scheme, the Economic and Social Research Council (ESRC) and The National Archives welcome applications from academic economists interested in working in a research capacity in the Office of Public Sector Information (OPSI). OPSI is part of The National Archives, a member of the Ministry of Justice family, working to set standards, deliver access and encourage the re-use of PSI.

The Placement Fellowship Scheme encourages social science researchers to spend time within a partner organisation to undertake policy relevant research and to develop the research skills of partner employees. The Fellowship will be jointly funded by the ESRC and OPSI while the Fellow remains employed by his or her institution.

See the document below for further details on the Placement Fellowship: http://www.nationalarchives.gov.uk/documents/esrc-placement-fellowship-june-09.pdf