The proposed Geospatial Information Regulation Bill, 2016, threatens to destroy the innovation ecosystem built on geospatial data. A better option is a simple registration-based system
The Ministry of Home Affairs recently posted the draft of a bill aimed at regulating the acquisition and use of geospatial information pertaining to India. In brief, the provisions of the draft Geospatial Information Regulation Bill, 2016, make it illegal to acquire, or even to maintain previously acquired, Indian geospatial data without applying for and receiving a licence from an authority that is to be created for this purpose. The remit of this authority, as per the draft, is, first, to conduct “sensitivity checks” on the geospatial information being used, and, second, to “screen” the “credentials” of both end users and end applications. Media reports have tended to dwell on the heavy penalties the bill prescribes for misrepresenting the boundaries of India; let us instead examine the provisions that bear on the data ecosystem. The bill, as written, raises some questions.
A problem of logistics
What happens if the data need an update? The draft bill’s definition of geospatial information is remarkably wide. It covers information that we think of as relatively stable, but it also takes in “graphical or digital data depicting… man-made physical features”. Geospatial information, especially when so widely defined, keeps changing. In Delhi, for example, we see roads being modified, overpasses being constructed, and temporary and permanent diversions being created almost daily. So, what happens when the data change?
Consider the illustrative, though not earthshakingly important, case of your favourite restaurant discovery app: will it have to apply for a new licence every time a new restaurant opens (or closes) in Hauz Khas Village? Effectively it will, since the draft bill proposes that only data bearing the watermark of the vetting authority be used for display. Changing the name of a restaurant in such data would amount to tampering with watermarked data. Withholding updates until security clearance comes through could undermine any business premised on providing up-to-date information. The bill promises a three-month turnaround on all clearances; that might not be quick enough, even if it were feasible.
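To see why even a one-line edit runs afoul of the watermark, it helps to think of the watermark as a cryptographic seal over the data. The draft bill does not specify how the watermark would actually work; the toy Python sketch below, with an entirely hypothetical authority key, simply models it as a seal that any change, however small, invalidates.

    import hashlib
    import hmac
    import json

    # Hypothetical model: the "watermark" is an HMAC issued by the vetting
    # authority over the serialised dataset, so any edit invalidates it.
    AUTHORITY_KEY = b"vetting-authority-secret"  # purely illustrative

    def watermark(data):
        blob = json.dumps(data, sort_keys=True).encode()
        return hmac.new(AUTHORITY_KEY, blob, hashlib.sha256).hexdigest()

    def is_intact(data, mark):
        return hmac.compare_digest(watermark(data), mark)

    vetted = {"Hauz Khas Village": ["Restaurant X", "Restaurant Y"]}
    mark = watermark(vetted)  # issued after the "sensitivity check"

    # A new restaurant opens, so the app updates its copy of the data...
    vetted["Hauz Khas Village"].append("Restaurant Z")

    # ...and the watermark no longer verifies: under the draft bill the
    # updated data cannot be displayed until they are vetted afresh.
    print(is_intact(vetted, mark))  # False

On this reading, every update, however trivial, forces a round trip through the licensing process. Which leads us to the next question.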
Do we have the bandwidth to handle all applications for this usage, from inside and outside India? It is hard to estimate how many non-governmental services inside and outside India currently use Indian geospatial data, but we can safely say that they are numerous and that their impact is significant. Add to these all those 17-year-olds dreaming of start-up glory who are mashing Google Maps into their soon-to-be-world-dominating apps. A government regulator that is yet to be set up will need hundreds of experts who can “vet” terabytes of data from each applicant.
The logistics of getting these data across to the vetting authority alone boggle the mind, never mind the logistics of hiring and training those hundreds of experts. Unless this bill, on becoming an act, manages to single-handedly kill the innovation ecosystem that depends on geospatial data, the number of requests will only keep growing. And all these applicants will be “acquiring” data and wanting to propagate updates. Which brings us to the next question.
The complex data ecosystem
Does every single end user of such data also need a licence? Large organisations like Google, which acquire geospatial data and make them available through their application programming interfaces (APIs), sit in some sense at the lowest level of an application stack that could have several layers (and probably already does). Application A buys a service that uses geospatial data from application B, which has in turn bought it from provider C, who has licensed it from organisation D. Or, in a more complex turn of events, app A mashes up data from services B, C and D, which in turn have bought their data from E, F and G, and, guess what, F and G have some kind of data-sharing agreement. How will A get its data acquisition vetted?
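To make the provenance question concrete, here is a toy model in Python; the service names A to G mirror the example above and are purely illustrative. Each service lists its upstream data sources, and vetting any one application means tracing every transitive source, even when data-sharing agreements make the graph circular.

    # Each service maps to the services it sources geospatial data from.
    sources = {
        "A": ["B", "C", "D"],
        "B": ["E"],
        "C": ["F"],
        "D": ["G"],
        "E": [],
        "F": ["G"],  # F and G have a data-sharing agreement,
        "G": ["F"],  # so the provenance graph is not even acyclic.
    }

    def provenance(app):
        """Every upstream service whose data must be vetted for `app`."""
        seen, stack = set(), [app]
        while stack:
            for upstream in sources[stack.pop()]:
                if upstream not in seen:
                    seen.add(upstream)
                    stack.append(upstream)
        return seen

    # Vetting A's acquisition drags in the entire ecosystem behind it.
    print(sorted(provenance("A")))  # ['B', 'C', 'D', 'E', 'F', 'G']

Even this seven-node toy requires a full traversal of the graph; real dependency graphs will be far larger, and they will change continuously.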
The complexity of the ecosystem, and the trajectories such data can take, are limited only by the imagination of developers and service creators working on different kinds of problems in a host of sectors. In fact, such complexity typically emerges organically as different actors in the innovation ecosystem work to create new efficiencies or leverage existing ones, and so it is something to be encouraged. To satisfactorily “vet” the complex mishmashes of data that are bound to emerge over time will be a challenging task; indeed, some of the questions raised in vetting such involved data-provenance patterns may well be research-level questions. All this will further burden the vetting authority and stretch its capabilities.
An alternative that would still serve national security purposes is a simple registration-based system, one that does not make obtaining a licence a precondition to using data. Even a registration-based system, however, is fraught with danger in a framework that insists on scrutinising the credentials of every end user. A clear distinction must be made between the producers and the consumers of geospatial data, and, in order not to constrict the innovation ecosystem, the definition of consumers must be as wide as possible. It may be reasonable to require all publishers of geospatial data to register with the security-vetting authority and provide an online window through which the authority can audit their data. The authority can go through the data and raise an objection if it finds anything problematic, and it can do this in its own time; in the meantime the data can be used by end users and updated by the publisher as required. In other words, the onus has to be on the vetting authority to regularly check that the data are in order, rather than on the service. Shifting the onus onto the service risks creating a significant roadblock for a major part of the innovation ecosystem. This is undesirable.
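For contrast, here is a minimal sketch of the registration-first model proposed above. The class and method names, and the sample coordinates, are hypothetical; the point is the ordering of events: publishers register once, updates propagate immediately, and the audit happens afterwards without blocking anything.

    # Hypothetical sketch of a registration-based system: publish and update
    # immediately; the authority audits asynchronously, in its own time.
    class Registry:
        def __init__(self):
            self.publishers = {}  # publisher name -> current dataset
            self.objections = []

        def register(self, name, dataset):
            # Publishers register once and may serve data right away.
            self.publishers[name] = dataset

        def update(self, name, dataset):
            # Updates propagate immediately; no licence round trip.
            self.publishers[name] = dataset

        def audit(self, name, is_sensitive):
            # The "online window" through which the authority inspects data.
            for feature in self.publishers[name]:
                if is_sensitive(feature):
                    self.objections.append((name, feature))

    registry = Registry()
    registry.register("restaurant-app", {"Restaurant X": (28.5535, 77.1926)})
    registry.update("restaurant-app", {"Restaurant X": (28.5535, 77.1926),
                                       "Restaurant Z": (28.5541, 77.1930)})
    # The audit happens after the fact and never blocked the update above.
    registry.audit("restaurant-app", is_sensitive=lambda feature: False)
    print(registry.objections)  # []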
Amitabha Bagchi is an Associate Professor of Computer Science and Engineering and a member of the Data Analytics and Intelligence Research group at IIT-Delhi. He is also a best-selling novelist.