Category Archives: Open Knowledge Foundation

Candy Crush, King Digital Entertainment, Offshoring and Tax

Sifting through the King Digital Entertainment F-1 filing with the SEC for their IPO (February 18, 2014), I noticed the following in their risk section:

The intended tax benefits of our corporate structure and intercompany arrangements may not be realized, which could result in an increase to our worldwide effective tax rate and cause us to change the way we operate our business. Our corporate structure and intercompany arrangements, including the manner in which we develop and use our intellectual property and the transfer pricing of our intercompany transactions, are intended to provide us worldwide tax efficiencies [ed: for this I read - significantly reduce our tax-rate by moving our profits to low-tax jurisdictions ...]. The application of the tax laws of various jurisdictions to our international business activities is subject to interpretation and also depends on our ability to operate our business in a manner consistent with our corporate structure and intercompany arrangements. The taxing authorities of the jurisdictions in which we operate may challenge our methodologies for valuing developed technology or intercompany arrangements, including our transfer pricing, or determine that the manner in which we operate our business does not achieve the intended tax consequences, which could increase our worldwide effective tax rate and adversely affect our financial position and results of operations.

It is also interesting how they have set up their corporate structure going “offshore” first to Malta and then to Ireland (from the “Our Corporate Information and Structure” section):

We were originally incorporated as Midasplayer.com Limited in September 2002, a company organized under the laws of England and Wales. In December 2006, we established Midasplayer International Holding Company Limited, a limited liability company organized under the laws of Malta, which became the holding company of Midasplayer.com Limited and our other wholly-owned subsidiaries. The status of Midasplayer International Holding Company Limited changed to a public limited liability company in November 2013 and its name changed to Midasplayer International Holding Company p.l.c. Prior to completion of this offering, King Digital Entertainment plc, a company incorporated under the laws of Ireland and created for the purpose of facilitating the public offering contemplated hereby, will become our current holding company by way of a share-for-share exchange in which the existing shareholders of Midasplayer International Holding Company p.l.c. will exchange their shares in Midasplayer International Holding Company p.l.c. for shares having substantially the same rights in King Digital Entertainment plc. See “Corporate Structure.”

Here’s their corporate structure diagram from the “Corporate Structure” section (unfortunately barely readable in the original as well …). As I count it, there are 19 different entities, with a chain of length 6 or 7 from the base entities up to the primary holding company.

Northern Mariana Islands Retirement Fund Bankruptcy

Back on April 17, 2012, the Northern Mariana Islands Retirement Fund attempted to file for bankruptcy under Chapter 11. There was some pretty interesting reading in their petition for bankruptcy, including this section (para 10) which suggests some pretty bad public financial management (emphasis added): “Debtor has had difficulty maintaining healthy funding levels due to […]

Open Data Maker Night London No 3 – Tuesday 16th July

The next Open Data Maker Night London will be on Tuesday 16th July 6-9pm (you can drop in any time during the evening). Like the last two it is kindly hosted by the wonderful Centre for Creative Collaboration, 16 Acton Street, London.

Look forward to seeing folks there!

What

Open Data Maker Nights are informal events focused on “making” with open data – whether that’s creating apps or insights. They aren’t a general meetup – if you come, expect to get pulled into actually building something, though we won’t force you!

Who

The events usually have short introductory talks about specific projects and suggestions for things to work on – it’s absolutely fine to turn up knowing nothing about data, openness or tech, as there’ll be an activity for you to help with and someone to guide you in contributing!

Organize your own!

Not in London? Why not organize your own Open Data Maker night in your city? Anyone can and it’s easy to do – find out more »

data.okfn.org – update no. 1

This is the first of regular updates on the Labs project http://data.okfn.org/ and summarizes some of the changes and improvements over the last few weeks.

1. Refactor of site layout and focus

We’ve done a refactor of the site to give it a stronger focus on the data. The front page tagline is now:

We’re providing key datasets in high quality, easy-to-use and open form

Tools and standards are there in a clear supporting role. Thanks for all the suggestions and feedback on this – more is welcome as we’re still iterating.

2. Pull request data workflow

There was a nice example of the pull request data workflow being used (by a complete stranger!): https://github.com/datasets/house-prices-uk/pull/1

3. New datasets

For example:

Looking to contribute data? Check out the instructions at http://data.okfn.org/about/contribute#data and the outstanding requests: https://github.com/datasets/registry/issues

4. Tooling

5. Feedback on standards

There’s been a lot of valuable feedback on the Data Package and JSON Table Schema standards, including some quite major suggestions (e.g. a substantial change to JSON Table Schema to align more closely with JSON Schema – thanks to jpmckinney).
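For readers not familiar with these standards, here’s a very rough sketch (in Python) of writing out a minimal Data Package descriptor with a JSON-Table-Schema-style resource schema. The field names and file paths below are illustrative rather than spec-exact – the specs are still evolving, which is precisely what the feedback above is about.

```python
import json

# Illustrative (not spec-exact) Data Package descriptor for a single CSV
# resource, loosely modelled on the house-prices-uk dataset mentioned above.
datapackage = {
    "name": "house-prices-uk",                 # hypothetical dataset name
    "title": "UK House Prices",
    "resources": [
        {
            "path": "data/house-prices.csv",   # hypothetical path
            "schema": {                        # JSON-Table-Schema-style field list
                "fields": [
                    {"name": "date", "type": "date"},
                    {"name": "price", "type": "number"},
                ]
            },
        }
    ],
}

with open("datapackage.json", "w") as f:
    json.dump(datapackage, f, indent=2)
```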

Next steps

There’s plenty more coming up soon in terms of data and the site and tools.

Get Involved

Anyone can contribute and it’s easy – if you can use a spreadsheet you can help!

Instructions for getting involved here: http://data.okfn.org/about/contribute

Update on PublicBodies.org – a URL for every part of Government

This is an update on PublicBodies.org - a Labs project whose aim is to provide a “URL for every part of Government”: http://publicbodies.org/

PublicBodies.org is a database and website of “Public Bodies” – that is, government-run or controlled organizations (which may or may not have distinct corporate existence). Examples include government ministries or departments, state-run organizations such as libraries, police and fire departments, and more.

We run into public bodies all the time in projects like OpenSpending (either as spenders or recipients). Back in 2011 as part of the “Organizations” data workshop at OGD Camp 2011, Labs member Friedrich Lindenberg scraped together a first database and site of “public bodies” from various sources (primarily FoI sites like WhatDoTheyKnow, FragDenStaat and AskTheEU).

We’ve recently redone the site, converting the SQLite DB to simple flat CSV files.

The site itself is now super-simple flat files hosted on S3 (build code here). Here’s an example of the output:

The simplicity of CSV for data plus simple templating to flat files is very attractive. There are some drawbacks – for example, a change to the primary template results in a full rebuild and upload of ~6k files – so, especially as the data grows, we may want to look into something a bit nicer, but for the time being this works well.
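To give a flavour of the approach (this is not the actual build code, which is linked above), a minimal sketch of the CSV-plus-templating idea might look like the following – the CSV filename and column names are assumptions for illustration:

```python
import csv
import os
from string import Template

# Minimal sketch: render one static HTML page per public body from a CSV file.
# "publicbodies.csv" and the column names are assumptions for illustration.
PAGE = Template("<html><body><h1>$title</h1><p>$description</p></body></html>")

os.makedirs("build", exist_ok=True)
with open("publicbodies.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        out_path = os.path.join("build", row["id"] + ".html")
        with open(out_path, "w", encoding="utf-8") as out:
            out.write(PAGE.substitute(title=row["title"],
                                      description=row.get("description", "")))
```

The resulting build/ directory of flat files is then what gets uploaded to S3.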

Next Steps

There’s plenty that could be improved e.g.

  • More data - other jurisdictions (we only cover EU, UK and Germany) + descriptions for the bodies (this could be a nice crowdcrafting app)
  • Search and Reconciliation (via nomenklatura)
  • Making it easier to submit corrections or additions

The full list of issues is on github here: https://github.com/okfn/publicbodies/issues

Help is most definitely wanted! Just grab one of the issues or get in touch.

Quick and Dirty Analysis on Large CSVs

I’m playing around with some large(ish) CSV files as part of an OpenSpending-related data investigation looking at UK government spending last year – example question: which companies were the top 10 recipients of government money? (More details can be found in this issue on OpenSpending’s things-to-do repo.)

The dataset I’m working with is the consolidated spending (over £25k) by all UK government departments. Thanks to the efforts of OpenSpending folks (and specifically Friedrich Lindenberg) this data is already nicely ETL’d from thousands of individual CSV (and XLS) files into one big 3.7 GB file (see below for links and details).

My question is: what is the best way to do quick and dirty analysis on this?

Examples of the kinds of options I was considering were:

  • Simple scripting (python, perl etc)
  • Postgresql - load, build indexes and then sum, avg etc
  • Elastic MapReduce (AWS Hadoop)
  • Google BigQuery

Love to hear what folks think and if there are tools or approaches they would specifically recommend.
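For what it’s worth, the “simple scripting” option can get surprisingly far with a single streaming pass over the file. Here’s a minimal sketch – the filename and the “supplier” and “amount” column names are assumptions, since the real headers vary:

```python
import csv
from collections import Counter

# Stream the big CSV once, summing spend per supplier, so the full 3.7 GB
# never has to fit in memory. Filename and column names ("supplier",
# "amount") are assumptions for illustration.
totals = Counter()
with open("uk-25k-spending.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        try:
            supplier = row["supplier"].strip()
            amount = float(row["amount"].replace(",", ""))
        except (KeyError, ValueError, AttributeError):
            continue  # skip malformed or incomplete rows
        totals[supplier] += amount

# Top 10 recipients of government money
for supplier, total in totals.most_common(10):
    print(f"{supplier}: £{total:,.2f}")
```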

The Data

Cleaning up Greater London Authority Spending (for OpenSpending)

I’ve been working to get Greater London Authority spending data cleaned up and into OpenSpending. Primary motivation comes from this question:

Which companies got paid the most (and for doing what)? (see this issue for more)

I wanted to share where I’m up to and some of the experience so far, as I think these can inform our wider efforts – and illustrate the challenges of just getting and cleaning up data. Note that the code and README for this ongoing work are in a repo on GitHub: https://github.com/rgrp/dataset-gla

Data Quality Issues

There are 61 CSV files as of March 2013 (a list can be found in scrape.json).

Unfortunately the “format” varies substantially across files (even though they are all CSV!), which makes using this data a real pain. Some examples:

  • The number of fields and their names vary across files (e.g. SAP Document no vs Document no)
  • The number of blank columns or blank lines varies (some files have no blank lines (good!), many have blank lines plus some metadata, etc.)
  • There is also at least one “bad” file which looks to be an Excel file saved as CSV
  • Amounts are frequently formatted with “,” separators, making them appear as strings to computers
  • Dates vary substantially in format, e.g. “16 Mar 2011”, “21.01.2011” etc.
  • There is no unique transaction number (possibly the document number could serve)

They also switched from monthly reporting to period reporting (where there are 13 periods of approximately 28 days each).
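To give a flavour of the cleaning involved, here’s a minimal sketch of normalising the amount and date fields – the list of date formats is illustrative, not exhaustive:

```python
from datetime import datetime

def clean_amount(value):
    """Strip thousands separators and whitespace so '1,234.50' parses as a number."""
    return float(value.replace(",", "").strip())

def clean_date(value, formats=("%d %b %Y", "%d.%m.%Y", "%d/%m/%Y")):
    """Try a handful of date formats seen in the files; extend as new ones turn up."""
    for fmt in formats:
        try:
            return datetime.strptime(value.strip(), fmt).date()
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date format: {value!r}")

print(clean_amount("2,300,000"))   # 2300000.0
print(clean_date("16 Mar 2011"))   # 2011-03-16
print(clean_date("21.01.2011"))    # 2011-01-21
```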

Progress so far

I do have one month loaded (Jan 2013) with a nice breakdown by “Expenditure Account”:

http://openspending.org/gb-local-gla

Interestingly, after some fairly standard grants to other bodies, “Claim Settlements” comes in as the biggest item at £2.3m.

Progress on the Data Explorer

This is an update on progress with the Data Explorer (aka Data Transformer).

Progress is best seen from this demo which takes you on a tour of house prices and the difference between real and nominal values.

More information on recent developments can be found below. Feedback is very welcome – either here or in the issues: https://github.com/okfn/dataexplorer.

House prices tutorial

What is the Data Explorer

For those not familiar, the Data Explorer is an HTML+JS app to view, visualize and process data entirely in the browser (no backend!). It draws heavily on the Recline library and its features now include:

  • Importing data from various sources (the UX of this could be much improved!)
  • Viewing and visualizing using Recline to create grids, graphs and maps
  • Cleaning and transforming data using a scripting component that allows you to write and run javascript
  • Saving and sharing: everything you create (scripts, graphs etc) can be saved and then shared via public URL.

Note that persistence (for sharing) is to Gists (here’s the gist for the House Prices demo linked above). This has some nice benefits such as versioning, offline editing (clone the gist, edit and push), and a bl.ocks.org-style ability to create a gist and have it result in publicly viewable output (though with substantial differences vs bl.ocks …).

What’s Next

There are many areas that could be worked on – a full list of issues is on GitHub. The most important at the moment, I think, are:

I’d be very interested in people’s thoughts on the app so far and what should be done next. Code contributions are also very welcome – the app has already benefitted from the efforts of many people, including the likes of Martin Keegan and Michael Aufreiter on the app itself, and folks like Max Ogden, Friedrich Lindenberg, James Casbon, Gregor Aisch and Nigel Babu (and many more) in the form of ideas, feedback, work on Recline etc.

Recline JS – Componentization and a Smaller Core

Over time Recline JS has grown. In particular, since the first public announcement of Recline last summer we’ve had several people producing new backends and views (e.g. backends for Couch, a view for d3, a map view based on Ordnance Survey’s tiles, etc.).

As I wrote to the labs list recently, continually adding these to core Recline runs the risk of bloat. Instead, we think it’s better to keep the core lean and move more of these “extensions” out of core with a clear listing and curation process - the design of Recline means that new backends and views can extend the core easily and without any complex dependencies.

This approach is useful in other ways. For example, Recline backends are designed to support standalone use as well as use with Recline core (they have no dependency on any other part of Recline, including core), but this is not very obvious as it stands, with the backends bundled into Recline. To take a concrete example, the Google Docs backend is a useful wrapper for the Google Spreadsheets API in its own right. While this is already true, when the code is in the main Recline repository it isn’t very obvious; having the repo split out with its own README would make this much clearer.

So the plan is …

  • Announce this approach of a leaner core and more “extensions”
  • Identify the first items to split out from core – see this issue
  • Identify which components should remain in core (I’m thinking Dataset + Memory DataStore plus one Grid, Graph and Map)

So far I’ve already started the process of factoring out some backends (and soon views) into standalone repos, e.g. here’s GDocs:

https://github.com/okfn/recline.backend.gdocs

Any thoughts are very welcome, and if you already have Recline extensions lurking in your repos please add them to the wiki page.

Archiving Twitter the Hacky Way

There are many circumstances where you want to archive tweets – maybe just from your own account, or perhaps for a hashtag for an event or topic.

Unfortunately Twitter search queries do not give data more than 7 days old, and for a given account you can only get approximately the last 3,200 of your tweets and 800 items from your timeline. [Update: people have pointed out that Twitter released a feature to download an archive of your personal tweets at the end of December – this, of course, still doesn’t help with queries or hashtags.]

Thus, if you want to archive Twitter you’ll need to come up with another solution (or pay them, or a reseller, a bunch of money – see the Appendix below!). Sadly, most of the online solutions have tended to disappear or be acquired over time (e.g. Twapperkeeper), so a DIY solution is attractive. After reading various proposals on the web I’ve found the following to work pretty well (but see also this excellent Google-Spreadsheet-based solution).

The proposed process involves 3 steps:

  1. Locate the Twitter Atom Feed for your Search
  2. Use Google Reader as your Archiver
  3. Get your data out of Google Reader (1000 items at a time!)

One current drawback of this solution is that each stage has to be done by hand. It might be possible to automate more of this, especially the important third step, if I could work out how to do more with the Google Reader API. Contributions or suggestions here would be very welcome!

Note that the above method will become obsolete as of March 5, 2013, when Twitter closes down its RSS and Atom feeds – continuing their long march to becoming a more closed and controlled ecosystem.

As you struggle, like me, to get precious archival information out of Twitter it may be worth reflecting on just how much information you’ve given to Twitter that you are now unable to retrieve (at least without paying) …

Twitter Atom Feed

Twitter still have Atom feeds for their search queries:

http://search.twitter.com/search.atom?q=my_search

Note that if you want to search for a hash tag like #OpenData or a user e.g. @someone you’ll need to escape the symbols:

http://search.twitter.com/search.atom?q=%23OpenData

Unfortunately Twitter Atom queries are limited to only a few items (around 20), so we’ll need to continuously archive that feed to get full coverage.
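If you’re constructing these URLs programmatically, standard URL-encoding does the escaping for you. A quick sketch in Python (the helper function name is just for illustration):

```python
from urllib.parse import quote

# Percent-encode the query so '#' and '@' are safe in the URL
# (e.g. '#OpenData' becomes '%23OpenData').
def search_feed_url(query):
    return "http://search.twitter.com/search.atom?q=" + quote(query, safe="")

print(search_feed_url("#OpenData"))   # ...search.atom?q=%23OpenData
print(search_feed_url("@someone"))    # ...search.atom?q=%40someone
```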

Archiving in Google Reader

Just add the feed URL above to your Google Reader account. It will then start archiving.

Aside: because the Twitter Atom feed is limited to a small number of items, and the check in Google Reader only happens every 3 hours (1 hour if someone else is archiving the same feed), you can miss a lot of tweets. One option could be to use Topsy’s RSS feeds, e.g. http://otter.topsy.com/searchdate.rss?q=%23okfn (though it’s not clear how to get more items from this feed either!)

Getting Data out of Google Reader

Google Reader offers a decent (though still beta) API. Unofficial docs for it can be found here: http://undoc.in/

The key URL we need is:

http://www.google.com/reader/atom/feed/[feed_address]?n=1000

Note that the feed is limited to a maximum of 1000 items and you can only access it for your account if you are logged in. This means:

  • If you have more than 1000 items you need to find the continuation token in each set of results and then add &c={continuation-token} to your query.
  • Because you need to be logged in in your browser, you need to do this by hand :-( (it may be possible to automate this via the API but I couldn’t get anything to work – any tips much appreciated!)

Here’s a concrete example (note, as you need to be logged in this won’t work for you):

http://www.google.com/reader/atom/feed/http://search.twitter.com/search.atom%3Fq%3D%2523OpenData?n=1000

And that’s it! You should now have a local archive of all your tweets!
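Once you’ve saved the (up to 1000-item) Atom file(s) locally, pulling the entries out into a CSV is straightforward with standard Atom parsing. A minimal sketch – the filename “reader-export.xml” is just whatever you saved the response as by hand:

```python
import csv
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# Parse a saved Atom export from Google Reader and write the entries to CSV.
# "reader-export.xml" is a hypothetical filename for the file saved by hand.
root = ET.parse("reader-export.xml").getroot()
with open("tweets.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["published", "title", "link"])
    for entry in root.iter(ATOM + "entry"):
        published = entry.findtext(ATOM + "published", default="")
        title = entry.findtext(ATOM + "title", default="")
        link = entry.find(ATOM + "link")
        href = link.get("href", "") if link is not None else ""
        writer.writerow([published, title, href])
```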

Appendix

Increasingly, Twitter is selling access to the full Twitter archive, and there are a variety of third-party services (such as Gnip, DataSift, Topsy and possibly more) offering full or partial access for a fee.