A full analysis of the UK Information Commissioner's "Anonymisation code of practice: managing data protection risk" will take time and working knowledge of how the code is used in practice.
At the launch, the ICO signalled that while they believed the code was now up to scratch, they were open to additions and clarifications given that it is the first document of its kind in the world. We applaud them for this; the code is likely to be copied internationally, so it is particularly important that we get it right.
One of the main concerns in the consultation draft was the amount of ambiguity. The current document has strengthened the language around risk and what action should be taken if a data controller does anything wrong. At the launch of the code, the ICO stated clearly that anyone actively attempting to reidentify someone from the data will be treated as a data controller, and would therefore be expected to have registered as such and to have acquired consent. This is a procedural and legislative solution to many of the problems posed. We also welcome the ICO's clear statement of possible fines for breaches of the code, although we believe the fines themselves are barely adequate. More useful is the proposal (but not requirement) that data controllers create a "disaster recovery" process as part of their governance procedures, to "address what [they] will do if re-identification does take place and individuals’ privacy is compromised". This should be mandatory, and the process should be published.
While the section headings of the new code are almost identical to those in the draft, the text within has been substantially rewritten and improved. It is now much clearer who falls within the remit of the code and who does not, and what the penalties will be for misuse of anonymised data. However, we hope that when other countries begin developing their own anonymisation codes of practice, they include stronger safeguards than the minimal ones present in the British code.
Unfortunately, the most significant questions around anonymisation and data releases currently on the agenda - those connected with NHS data sharing and the opening up of the National Pupil Database - are not resolved by the code. In neither of these cases is the "motivated intruder" test primarily applicable - the "intruder" may well be someone to whom the government body gave or sold the data in the first place. The code also seems to approve compulsory data collection if the case is made that an opt-out system would damage the anonymised data set.
There is still a need for a different document focused specifically on organisations whose users have no choice but to hand over their data if they wish to use a service, such as schools and the NHS. This may be an area where the Open Data Institute can offer a strong viewpoint. The ODI's involvement with the UK Anonymisation Network, which was designed to provide unique expertise on what the code means in practice, puts it in a strong position to draft such a document, and would give that document the necessary standing and practical utility.
Most of our concerns are not about what is in the code, but what is left out. In a swiftly evolving area, we should be wary of omissions. But overall, it could be worse.