Beyond The Hype: The Big Issues In The European Court’s 'Right To Be Forgotten' Ruling

News & Analysis

Since the European Court of Justice ruled in the “right to be forgotten” case in May, there has been a dizzying amount of debate about the decision and its implications for privacy and free expression.

A main thread within these discussions is an old story that US industry has long loved to tell: Europeans love privacy law, Americans love free speech, and never the twain shall meet.

The Google Search case at the European Court of Justice has fuelled this view: Europeans, with their ridiculous over-regulation and hypersensitivity to data protection, have now legitimised censorship by ordering Google to remove search results to protect individuals’ privacy. American lawyers and non-European privacy commissioners have become strange bedfellows in fuelling claims that the ruling amounts to censorship.

European journalists and media outlets have played into Google’s hands, with some claiming that the decision has resulted in Google casting journalists “into oblivion”, with articles being “scrubbed” and sent “down the memory hole”.

And then of course there is Google itself, which is doing its best to stir up fear around the decision, claiming that the CJEU ruling “will be used by governments that aren’t as forward and progressive as Europe to do bad things”. The company has also been accused of sabotaging the judgement by being overly responsive to the removal requests it has received. As privacy scholar Paul Bernal noted, such a response has created “an atmosphere in which people feel more censored”.

This fear-mongering has obscured the true nature of what is a rather straightforward legal judgement with incredibly complex implications. We have already explained the reasoning behind the decision; here, we look beyond the hype to tease out the big issues and challenges underpinning the ruling.


Why is this so controversial?

If an institution holds information on any of us and we do not believe it should hold that information, we have the right to request its deletion. This principle in itself is uncontroversial. It is only when we seek to apply it to a near-universal search engine – as the Court of Justice of the European Union (CJEU) did – that the difficulties (and the controversy) arise.

At the heart of this debate is the question of how we as society view search engines. To many of us, they are unbiased and disinterested gateways to the wider world of the internet from which we seek to learn, engage, and enjoy. To others, such as researchers and journalists, a search engine is a source of intelligence that helps to reveal information hidden in plain sight amongst billions of datasets.

But we cannot forget the other sides to these search engines. Search engines are not neutral tools – their response to our search queries is shaped by what they know about us, what we have previously searched for, or even where our computer is located.

Google and other major search engines make money from their search services, comply with a wide variety of requests to alter search results, and present information in response to queries in tailored ways.

The era of an ‘objective’ search engine is over, if it in fact ever existed.

These popular search engines are not pure services for the public good. Rules, laws, power, and influence have long been applied against search engines; the companies that run them have interests of their own; and none of this has ever been directed at the protection of individual rights.

Search engines can also be tools that enable the more invasive actors in society, such as employers and investigators, to ferret out information about people, be they prospective employees or targets of interest. Companies and governments have long seen search engines as gateways to information they don’t want us to access, and have thus long worked with search engine companies to block us.

Finally, there is the issue of how we as individuals in a modern society actually use search engines. We have all done ego-searches to learn of our own visibility on the internet, and we have searched for other people, both those we know and those we don’t. Search engines are a source of intelligence for us all. And for some of us, they are a source of concern. At PI we have over the years received notes from people seeking protection from their search results: results that put them in an unfair light, that no longer represent who they are as people, and that, amongst other things, prevent them from gaining employment.

So the question of how we apply that important principle – that we should have a right to request certain information about us to be deleted from an institution’s databases – to online searches is a complex one. As a society we have long restricted what can be learned about people. We have placed rules on the retention periods of even potentially useful information, such as criminal records, because we believe it is for the betterment of society and of implicated individuals that information not exist in perpetuity. We have established privacy laws to restrict what institutions, both public and private sector, can do with our personal information.

These data protection laws place obligations on these organisations to make sure they protect our rights. This includes deleting information when it is no longer relevant or necessary.


Is the decision about the “right to be forgotten”?

The judgement hardly mentions the so-called “right to be forgotten”. Rather, it makes reference to existing rights under EU data protection law.

Privacy principles have long contended that if an institution holds information on you, then you retain rights over how that information is used, and the institution must abide by a set of fair information practices. This is the very set of principles that prevents your government from keeping secret files on you and requires that information be used only for specific purposes. Similarly, any company or institution processing personal information must ensure that the information is adequate, relevant and not excessive, kept up to date, and kept only for as long as necessary. So, in the European Court of Justice case, when Mr Gonzalez asked Google for links regarding his previous financial situation to be removed, he was arguing that these were inadequate, irrelevant or no longer relevant, or excessive in light of the time that had elapsed.

The Court’s decision is quite clear: first, there is a legal framework that protects privacy along very specific principles. Second, a company that sells services in a jurisdiction has to abide by the laws of that jurisdiction. Third, this means that the company and its services have to abide by long-established privacy principles. The Court took the view that search engines allow a detailed profile of an individual to be compiled, and as such, should follow the law.


Does it mean that newspapers will be censored?

No. The European Court was clear, as were the Spanish courts, that this ruling does not affect the publication of the material in the first place. It only affects the search results for queries that include the name of the individual contesting the result.


Does it mean that some information will no longer be accessible on the internet?

No. The information will still exist on the internet, but how it is discovered will be different (and in many circumstances, more difficult).

If an individual asks for his or her personal information to be removed from a search engine’s results index, and the search engine complies, that information will no longer be listed in response to a query to that search engine that contains the person’s name. But it will be listed in response to other queries, and it will still be on, and searchable via, the website of the news organisation that originally posted it.

So, for example, if John Smith wants a search engine to remove a link to a university newspaper article from 1997 about a plagiarised law journal submission, and the search engine agrees, that article will no longer be returned in response to the query “John Smith law journal” or the like, but will be returned in response to the query “law journal plagiarised 1997 university”. The story will also remain searchable on the university newspaper’s website, including under John Smith’s name.


Doesn't this place a lot of power in the hands of search engines – particularly Google?

Yes – the ruling undoubtedly gives rise to real concerns about how search engines like Google are to appropriately assess claims made by individuals and to act as arbiter of what is “in the public interest”.

Part of the difficulty is that the CJEU ruling empowers individuals to seek the removal of information on the basis of essentially subjective factors: where data are inadequate, irrelevant or excessive in relation to the purposes for which Google is processing them, are not kept up to date, or are kept for longer than is necessary (other than for historical, statistical or scientific purposes). It is an incredibly difficult thing to ask a search engine to determine the relevance of a piece of information, particularly given the multitude of differing interests that different parties have in accessing information over the internet.

However, the ruling does not require Google to determine generally what information is relevant to remain on the internet and what isn't. It only has to narrowly determine whether specific information should be delinked from a particular individual's name when a search is done on the basis of that name, and only after an application by that individual and a consideration of whether it is in the public interest to refuse that application.

Google has been quick to highlight how difficult the judgement is to implement in a practical sense. However, it should be remembered that it is in Google’s interest to oppose and misrepresent the judgement. Google has long opposed any European regulation that makes demands of it: the company has spent a good deal of time in Brussels over the past two years opposing proposed additions to the European data protection framework that protects individual privacy rights.

The CJEU decision is a good opportunity for the company to oppose and influence a similar provision in the revised draft European Union Data Protection Regulation, currently making its slow progress through the EU Council (i.e. the Member States are still discussing it). There are signs of Google’s success in this respect: the UK Justice Minister has already announced that he will fight any such ‘right to be forgotten’ provisions in the new legislation.

Google’s decision to set up an external advisory council and take it on a road-show around Europe in order to determine “how one person’s right to be forgotten is to be balanced with the public’s right to information” could be justified on the grounds that the Court did not say how that balance should be struck or who should make that decision. Whether the many expert opinions will muddy the waters even further or bring much-needed clarity remains to be seen.


What are the future implications of the decision?

This decision is actually quite significant beyond the search industry. It affects not just search engines but any organisation that brings together information generated by other parties, whether in government or in other industries (e.g. credit agencies). They too must comply with the law.

Much of the conversation to date has centred on the removal of links to news articles, thus fuelling a discussion about the free expression and journalistic implications of the decision. But consider cases where the ‘source’ of information isn’t a news article but an actual data set. What if the links in question are links to naked photos of an individual, or to news stories about naked photos? With the growing emphasis on Open Data, if there were a data set that Google could search through that included our personal information and appeared within search results about us, shouldn’t we have the right to demand that it not appear? What if the information was our academic results from school? Medical information? Past activities? Should that information be forever linked to us? Do we need to reconceptualise how our data trail may forever follow us around, or should we try to develop mechanisms that allow the individual to call into question how this may or may not occur?

At PI we believe that the individual should be an active participant in how his or her information is used, and we should all be developing systems and procedures to involve the individual in these decisions.

Given the complexity of the challenges raised by these questions, it may be that we need to think further about the unique role that Google and other major search engines play in our society, and whether a different set of regulations should be developed to deal with these challenges. However, we believe that the real long-term solution to this problem is the speedy adoption of the overdue revised EU data protection legislation, with the provision of a clear, well-written, and practically implementable “right to erasure” for individuals. Governments currently negotiating the new laws should move speedily to implement them, rather than delaying and undermining them.

In the short term, data protection authorities should issue detailed guidance, with clear criteria to be followed by search engines when requests for removal of links are made. European authorities have recently announced agreement on a ‘tool-box’ to handle complaints resulting from search engines’ refusals to de-list. The same ‘tool-box’ must be made available to adjudicate the primary requests to Google, and the criteria used must be published for all to see. The Court has decided that de-listing requests should go to Google in the first instance, with the authorities handling complaints only; this is the usual process in redress systems, such as complaints against ISPs or banks.

The worry in this case is that many data protection authorities are not well resourced and complaint investigations already take a long time to resolve; the authorities may also not be entirely neutral, at least from a freedom of information perspective. So a possible solution may well be to follow other sectors’ redress models and create a pan-European “Internet Ombudsman”, jointly financed by all the authorities.