Privacy International and others file legal complaints across Europe against controversial facial recognition company Clearview AI

Privacy International, together with three other organisations, has filed a series of legal complaints against Clearview AI, Inc., the facial recognition company that claims to have “the largest known database of 3+ billion facial images”.

Key points
  • PI, together with Hermes Center for Transparency and Digital Human Rights, Homo Digitalis and noyb - the European Center for Digital Rights, has filed a series of legal complaints against Clearview AI, Inc.
  • The complaints were submitted to data protection regulators in France, Austria, Italy, Greece and the United Kingdom.
  • Clearview AI scrapes the web for any images it detects as containing human faces, compiles them into a biometric database, and sells access to that database to private companies and law enforcement globally.
  • PI and the other organisations argue that Clearview AI's business model violates European data protection laws.
Press release
Image: A police officer using social media surveillance technology.

Privacy International (PI), together with Hermes Center for Transparency and Digital Human Rights, Homo Digitalis and noyb - the European Center for Digital Rights, has today filed a series of legal complaints against Clearview AI, Inc. The facial recognition company claims to have “the largest known database of 3+ billion facial images”. The complaints were submitted to data protection regulators in France, Austria, Italy, Greece and the United Kingdom.

As our complaints detail, Clearview AI uses an “automated image scraper”, a tool that searches the web and collects any images it detects as containing human faces. All of these faces are then run through Clearview's proprietary facial recognition software to build a gigantic biometric database. Clearview then sells access to this database to private companies and law enforcement globally.

“European data protection laws are very clear when it comes to the purposes companies can use our data for”, said Ioannis Kouvakas, Legal Officer at PI. “Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users”.

“Clearview seems to misunderstand the Internet as a homogeneous and fully public forum where everything is up for grabs”, said Lucie Audibert, Legal Officer at PI. “This is plainly wrong. Such practices threaten the open character of the Internet and the numerous rights and freedoms it enables”.

Regulators now have 3 months to respond to the complaints. We expect them to join forces in ruling that Clearview’s practices have no place in Europe, which would have meaningful ramifications for the company’s operations globally.

Note to Editors

Clearview became widely known in January 2020, when a New York Times investigation revealed its practices to the world. Prior to this, Clearview had operated with intentional secrecy, while offering its product to law enforcement agencies in various countries as well as to private companies.

The five submissions, some of which also build on data subject access requests submitted by individuals, add to the series of investigations launched in the wake of last year’s revelations. "Just because something is 'online' does not mean it is fair game to be appropriated by others in any which way they want to - neither morally nor legally. Data protection authorities need to take action and stop Clearview and similar organisations from hoovering up the personal data of EU residents", said Alan Dahi, Data Protection Lawyer at noyb.

Clearview has also reportedly entered into contracts with law enforcement authorities in Europe. In Greece, following a query submitted by Homo Digitalis, the police denied any collaboration with the company. "It is important to increase scrutiny over this matter. The DPAs have strong investigative powers and we need a coordinated reaction to such public-private partnerships”, said Marina Zacharopoulou, Lawyer and member of Homo Digitalis.

Due to their extremely intrusive nature, facial recognition systems, and particularly any business model that seeks to rely on them, raise grave concerns for modern societies and individuals' freedoms. Last month, the Italian data protection authority blocked police forces from using real-time facial recognition. “Facial recognition technologies threaten our online and offline lives,” said Fabio Pietrosanti, President of the Hermes Center. “By surreptitiously collecting our biometric data, these technologies introduce a constant surveillance of our bodies.”

Individuals in the EU can ask the company whether their face is in Clearview AI’s database and request that their biometric data no longer be included in the searches that the company’s clients perform. PI has provided general guidance on how to exercise your data access rights.