Privacy International defends the right to privacy across the world, and fights surveillance and other intrusions into private life by governments and corporations.



Summary

How an organisation conceptualises and handles the personal data it holds may not accurately reflect the reality of the organisation, the data or the subjects. For example, data about who has entered a shop over the course of a few weeks might seem innocuous, but if that shop contains a Post Office branch and elderly individuals are visiting at regular weekly intervals, anyone in possession of the data could work out when these individuals are collecting their pensions. This kind of nuance must be fundamental to all considerations of data release, but is ignored by the current draft code of practice for anonymisation.
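The pension-collection example above can be made concrete. The sketch below is purely illustrative, with invented visitor IDs and dates: given a shop's entry log, a few lines of code suffice to flag individuals who visit at near-weekly intervals, which is exactly the kind of inference that makes superficially innocuous data sensitive.

```python
from datetime import datetime
from statistics import mean, pstdev

# Hypothetical entry log (visitor ID -> visit dates), such as loyalty-card
# or door-counter records might yield. All identifiers and dates invented.
visits = {
    "visitor_17": ["2012-06-04", "2012-06-11", "2012-06-18", "2012-06-25"],
    "visitor_42": ["2012-06-02", "2012-06-13", "2012-06-19", "2012-06-30"],
}

def weekly_pattern(dates, tolerance_days=1.0):
    """Return True if the visits recur at near-weekly, regular intervals."""
    ds = sorted(datetime.strptime(d, "%Y-%m-%d") for d in dates)
    gaps = [(b - a).days for a, b in zip(ds, ds[1:])]
    # Regular weekly rhythm: mean gap close to 7 days, little variation.
    return (bool(gaps)
            and abs(mean(gaps) - 7) <= tolerance_days
            and pstdev(gaps) <= tolerance_days)

for vid, dates in visits.items():
    if weekly_pattern(dates):
        print(f"{vid}: visits at regular weekly intervals")
```

Here visitor_17 is flagged (visits exactly seven days apart) while visitor_42 is not; combined with the knowledge that the shop hosts a Post Office branch, the flagged pattern supports the inference described above.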

Anonymisation is one method of protecting data, but it should be seen as part of a wider process of "disclosure control". Even the term "anonymisation" is fundamentally problematic, as it suggests a black-and-white distinction that barely exists in theory, and not at all in reality.

Fundamentally, disclosure control is a specialist area requiring expertise and experience. Expecting non-specialists to be proficient at spotting potential matches is problematic because of 'observer bias': data controllers view the data from their own perspective and may miss linkages that an outsider would find. The code should therefore require a proportionate, operationally independent review of disclosure control decisions and the environment in which they are made. The code also contains too little data and too few references to further materials. Furthermore, by devoting a section to spatial data alone, it suggests that this is the only type of data that raises concerns, which is not the case.