Biweekly notes #1

In an attempt to write more often I’ve decided to start a biweekly series that illuminates my activities, thinking and readings. In doing this I’ll be following the ways of Charl and Alper. I’ve marveled plenty at their ability to push posts on a regular basis and have finally collected enough courage and content to do the same. Here goes.

Open Geo Data

Friday I presented at /dev/haag about (open) geo data. The talk was all about demystifying standardised geo web services. These are underused due to, among other things, their invisibility, their verbosity and the poor presentation of their documentation and tools. At /dev/haag I aimed to tackle the first of these by discussing the Dutch geo scene, what geo standards are and how easy they are to use once you get a grip on the many abbreviations. For instance, downloading the contents of a Web Feature Service is as easy as

ogr2ogr -f GeoJSON data.geojson WFS:"http://..." layer_name

The city of Chicago made a splash recently by releasing their building footprint data through GitHub. I rhetorically asked why we need SDIs and geoportals when we can git clone datasets. The question struck a chord, as it was retweeted a couple of times. This didn’t surprise me: SDIs and geo tech in general enjoy a certain degree of disdain among developers. In the presentation I argued that, yes, there are weaknesses, but in the end the services work quite okay and you can do some nifty stuff with them:

ogr2ogr -f GeoJSON data.geojson WFS:"http://..." gemeenten2011 \
  -where "population > 100000"
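Under the hood, ogr2ogr is simply issuing a standard WFS GetFeature request. A minimal sketch in Python shows what such a request looks like; the endpoint and the `cql_filter` vendor parameter (supported by e.g. GeoServer) are assumptions for illustration, not part of the talk.

```python
from urllib.parse import urlencode


def wfs_getfeature_url(endpoint, layer, out_format="application/json",
                       cql_filter=None):
    """Build a WFS 2.0 GetFeature request URL (no network call is made)."""
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": layer,
        "outputFormat": out_format,
    }
    if cql_filter:
        # cql_filter is a common vendor extension, not core WFS
        params["cql_filter"] = cql_filter
    return endpoint + "?" + urlencode(params)


# Hypothetical endpoint, mirroring the ogr2ogr call above
url = wfs_getfeature_url("http://example.org/wfs", "gemeenten2011",
                         cql_filter="population > 100000")
print(url)
```

Pasting the resulting URL into a browser returns the filtered features directly, which is all ogr2ogr is doing before converting the response to GeoJSON.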

Check out the slides on Speakerdeck for the whole story and why you should care.

Dutch national geo register

During the /dev/haag presentation I also tried to shine some light on the Dutch geo information scene as it is, to put it mildly, in a state of disarray. The first point of contact, the Nationaal Georegister, is extremely awkward to operate. It is ridden with bugs and bad design; finding information is close to impossible. @eugenetjoa summed up the portal’s usefulness succinctly by comparing it to a big Easter egg.

Seeing that the usability did improve considerably with the release of version 2.0, I figured the NGR team could use some feedback. The NGR’s feedback mechanisms operate through forms and email, making the feedback process an opaque ordeal. Nowadays we have access to better feedback tools, so I opened an issue tracker on GitHub. My aim is to draw the feedback discussion out into the open.

(Un)surprisingly the tracker took off, aided in part by a broken release of several 50 cm aerial imagery services. Within minutes of the release a couple of developers filled the tracker with bug reports. Shortly thereafter the bugs were fixed. It’s unclear whether the tracker played a role in their fixing.

While the newly formed tiny community was reporting issues I had a chance to speak to one of NGR’s maintainers. The tracker’s superiority over NGR’s form-based feedback flow quickly became apparent and was solidified when @miblon posted a short code snippet (in other words: documentation) showing how to use one of the services in OpenLayers. This short demo got me an appointment with NGR’s community/strategy manager to discuss the issue tracker and several other improvement suggestions I submitted to them.

To strengthen this bottom-up approach I’ll also be meeting @eugenetjoa and @geogoeroe to discuss guerrilla NGR improvement efforts.

The Hague Open Data

Last Monday I visited the fifth installment of The Hague’s open data Meetup. Monday’s turnout wasn’t great but we had a good brainstorm session about this year’s strategy and developments.

Opening data is still a challenge, primarily due to the difficulty of effectively educating city officials about the philosophy and benefits of opening data. Arn and Gerrit Jan are spearheading The Hague’s open data efforts. They’ve been making headway with the municipality and are hosting the Meetups.

Based on their input we discussed a shift of focus: away from releasing as much data as possible and figuring out ex post what to do with it, towards building pleasant urban augmentations that use open/public data. In the spirit of Random Hacks of Kindness and Code for America, we theorized about organizing developers and the community at large to aid The Hague’s open data activities prior to the data’s actual release, and about moving away from apps as the only way one deals with public data (as is predominantly the case in Europe). Some of the concepts we discussed are listed below.

city legibility We value our environment more when we know more about it (PDF). Despite all the data and tech we have at our disposal, most cities remain illegible. The Hague, for instance, is brimming with headquarters of international organisations. Walking through the city, however, these are difficult to discern. We’ll be looking into ways and projects to make the city easier to query.

urban dashboard Get some basic info about the city’s current state. A leading example might be the London city dashboard.

data fix day Cities and municipalities often lack the resources needed to make a dataset fit for use in e.g. a hacking session. In the spirit of RHoK and Code for America, we came up with the idea of helping municipalities fix their data one day before a hackathon. This way hackers don’t have to spend precious time during an event or thereafter struggling with mundane tasks such as cleaning and georeferencing.

adopt a dataset We also acknowledged the sad state of many datasets in the Dutch data stores. Many a dataset is simply a dump: uncleaned, without metadata, incomplete, not georeferenced, etc. Maintaining datasets is a demanding task which at times requires some degree of expertise. Cities have limited resources; many are unable to maintain a dataset after release. We thus fantasised about putting datasets up for adoption. By adopting a dataset, members of the community pledge to take care of and maintain it: clean, georeference, transform into suitable formats, document, answer questions about it, etc., such that others can quickly get work done.

spatial focus Instead of focusing on the quantity of released data we thought it productive to instead focus on a small part of the city and get as much data as possible for that area. Focusing on a single area makes the work done in pilot projects more visible as all digital augmentations are automatically brought together. Focusing on a specific area goes hand in hand with the above ideas: make a small part of the city hyper-legible using high quality (through adoption) open/public data, crowdsource missing information and location specific needs.

temporal focus Use open data and crowd efforts to make urban events like the yearly Parkpop more legible.

The next Meetup is planned for May 14th. See you there!
