For some time now, governments, industry, and nonprofit organizations have been touting the benefits of “open data,” or government data that is released to industry and citizens. But now that some governments are actually starting to implement open data, the discussion is gaining more existential nuance. Everyone’s still in favor of open data; they’re just spending more time talking about what it should be for.
Part of the impetus for these discussions is the meeting earlier this month of the Open Government Partnership, where 62 countries got together in London to talk about the future of open data. Naturally, several major reports were released in conjunction with this meeting. These ranged from the Open Knowledge Foundation’s Open Data Index 2013, which ranked countries in order of which was most open, to the Open Data Barometer, published by the World Wide Web Foundation.
Not surprisingly, given the praise that its government website has garnered, the U.K. ranked first in the openness of its data, followed by the U.S., then Denmark, Norway, and the Netherlands. Of the countries assessed, Cyprus, St. Kitts & Nevis, the British Virgin Islands, Kenya and Burkina Faso ranked lowest, the Foundation said.
There was also an eye-popping report from the McKinsey Global Institute, “Open Data: Unlocking innovation and performance with liquid information,” claiming that open data could be worth $3 trillion. This certainly caught everyone’s attention. Open data is fine and noble and all, but here’s a chance to make some money.
At the same time, some governments — rather than just throwing all the data out willy-nilly — are starting to take a closer look at what they’re putting out there. In Australia, for example — which ranked sixth worldwide — the government recently pared down its datasets by a third, claiming that some were “junk,” and cleaned up others (e.g., it consolidated 200 files into a single dataset). Developers are also pointing out that government data, while nominally “open,” isn’t necessarily available in a form that’s easy to use, but instead requires tweaking and massaging.
There’s also the issue — as the number of datasets grows — of actually finding the data, a problem that the semantic web is intended to help solve.
So more about that $3 trillion McKinsey figure. First of all, as it turns out, that amount would come primarily from money consumers save by making better decisions with the additional information, rather than from money industries will make. For example, the greatest potential benefit is in education: learning how to teach people better so they can earn more over their lifetimes. And the greatest potential value for individuals is increased productivity and time savings, such as using open data to reduce travel times, writes Noreen Seebacher in CMSWire.
Second, some are concerned that focusing on the money to be made from open data is contrary to the philosophy that the open data movement started out with: a focus on accountability and transparency. To a certain extent, the potential windfall was used as a carrot (to mix metaphors) to get the open data camel’s nose into the tent, as it were, by enticing less idealistic government representatives with talk of business cases and ROI. (Ironically, writes the Foundation, even relatively open countries have little corporate information available in an easy-to-use form.)
But there may be so much emphasis on technology and making money, some fear, that the higher purposes of open data have been forgotten. “Today a regime can call itself ‘open’ if it builds the right kind of web site — even if it does not become more accountable,” write Harlan Yu, of Princeton’s Center for Information Technology Policy, and David G. Robinson, of Yale Law School’s Information Society Project, in their oft-cited 2012 paper, “The New Ambiguity of ‘Open Government.’” “This shift in vocabulary makes it harder for policymakers and activists to articulate clear priorities and make cogent demands.”
“Open government advocates risk conflating technological openness with political openness — of associating the openness and usability of information, software, standards, and the digital architecture of government with the openness of official institutions and processes to the citizens they are supposed to serve,” agrees Jonathan Gray, Director of Policy and Ideas at the Open Knowledge Foundation.
The question, then, becomes how to make politically open data happen, even when that data could be embarrassing or controversial. Does some potentially embarrassing or controversial data exist that just hasn’t been released yet? Perhaps countries are waiting to release the data until someone asks for it; but like the unattended tree in the forest, how can people ask for it if they don’t know it’s there? Or, as anyone who’s filed a Freedom of Information Act request might recognize, will countries take a purloined-letter approach, burying the controversial data in a huge pile of other, inoffensive data?
What’s more likely is that potentially embarrassing data is just never accumulated in the first place, either because it’s too hard to get, or because being able to say “Gosh, we just don’t have that” makes it simpler to deal with requests for it. Ultimately, data is only as open as the intentions of the people collecting it.
Simplicity 2.0 is where we examine the intricate and transitory world of technology—through a Laserfiche lens. By keeping an eye on larger trends, we aim to make software that’s relevant to modern day workers, rather than build technology for technology’s sake.