
Image by Jamillah Knowles & Digit from https://betterimagesofai.org under https://creativecommons.org/licenses/by/4.0/
There is an urgent need for the regulation of facial recognition technologies in the UK to protect people from the grave risks they pose to human rights. In light of this, we conducted research into how other states and jurisdictions are regulating biometric technologies.
Across the world, facial recognition technologies (FRT) are increasingly being deployed in public and private spaces without adequate laws or regulation to protect individuals from the grave risks they pose to human rights. States rely more and more on this technology for public mass surveillance, enabling an authoritarian omnipresence over people’s activities, movements, and expressions at all times, often without their knowledge.
Over the past few years, we have raised concerns about the rapid expansion of the use of FRT within the UK by law enforcement and the private sector, directly encouraged and enabled by the UK government. These developments are playing out in a democratic vacuum, without any specific law in place pertaining to the use of FRT. To this end, we launched our campaign, the 'End of Privacy in Public', to inform the public and parliamentarians of these harms, and conducted wider advocacy around the need for specific legislation to impose the necessary restrictions and safeguards on the use of FRT.
We have also been monitoring how other states and jurisdictions around the world are taking steps to regulate biometric technologies to address harms and mitigate risks. The UK is now an outlier. Its European neighbours have introduced the European Union (EU) Artificial Intelligence Act (AI Act), which places restrictions on the use of biometric technologies, including FRT, across Europe. Although not without its flaws, it stands as the most concrete example of FRT regulation. We have also seen developments across the United States (US), with individual states introducing prohibitions and/or specific safeguards on FRT. Here we reassert the need for regulation of FRT within the UK, whilst reflecting on regulatory developments around the world as potential guidance.
Throughout the UK, FRT is used in a variety of ways, be it for policing purposes, by the private sector in retail spaces, or even on public transport, with little to no public consent or oversight. Instead, we are observing law enforcement and the private sector draw up their own rules and deploy FRT in ways that interfere with our human rights.
London is turning into one of the most heavily surveilled capitals in the world, and other cities are witnessing several experimental uses of FRT. London's Metropolitan Police Service (the Met) has reportedly scanned around 1 million faces so far in 2025, and 4.7 million faces in 2023. The Met has also announced that it will install permanent live facial recognition (LFR) cameras in Croydon, South London, in the summer of 2025, meaning we will see facial recognition cameras mounted on street furniture. In Cardiff, South Wales, police have also expanded their live facial recognition cameras to international sports events, effectively turning social gatherings into surveillance experiments. This temporary but sweeping setup drastically increases the number of unsuspecting people being scanned.
UK police officers have also begun using operator-initiated facial recognition directly on their smartphones, giving them excessive power to identify people instantly on the spot.
The use of FRT is not limited to law enforcement agencies. Private sector entities including banks, supermarkets, nightclubs, gyms, and more have increasingly adopted FRT for purposes such as age verification, crime prevention, access control, and workplace monitoring. These deployments operate with minimal transparency and oversight, potentially discriminating against marginalised communities, with lower-income areas disproportionately targeted.
Police forces justify these uses of FRT on the basis of common law policing powers, asserting that generic data protection legislation is sufficient to regulate its use. Private sector use remains almost completely unchecked. Furthermore, our campaign revealed an astonishing knowledge gap among UK MPs: many were sorely misinformed about, or entirely unaware of, whether FRT is being used in their constituencies. They lacked insight into the threats FRT poses and whether there is an appropriate legal framework in place.
However, we are slowly starting to see some progress. In November 2024, UK MPs held the first parliamentary debate on police use of live facial recognition technology since FRT was first deployed by the Met in August 2016. Furthermore, in July 2025 the UK Home Secretary Yvette Cooper acknowledged that the UK government intends to create “a proper, clear governance framework” to regulate the use of facial recognition. However, it is unclear when this framework will be published and whether it will be statutory.
Legislation has never been so urgently needed in the UK to establish clear rules and restrictions on certain uses and types of FRT for the protection of our fundamental rights.
The broad range of human rights concerns raised by FRT has been well documented. These range from interferences with the rights to dignity and non-discrimination to the rights to privacy, freedom of expression, and freedom of assembly and association. The omnipresence of constant surveillance opposes the core notions of self-discovery, fulfilment and democratic values in a society to which we all contribute.
There is strong evidence that FRT can have discriminatory results. For instance, it misidentifies individuals more often if they are women, people of colour or people from minority ethnic backgrounds, revealing underlying racial and gender biases in the systems. These bias and discrimination concerns can have devastating consequences for individuals, especially those belonging to marginalised groups. For example, unreliable FRT evidence has been used to support criminal investigations and proceedings, such as in Argentina, where a man was accused of committing a robbery in a city 600 kilometres away from where he was at the time.
Mitigation of these human rights risks is urgently needed through specific legislative and regulatory frameworks for the use of biometric technologies. When it comes to examples of legal frameworks for biometrics across the world, the global picture is unfortunately limited. However, we are slowly starting to see the development of legislation that prohibits or restricts aspects of FRT. Among these are the EU AI Act and state-level legislation across the United States.
The EU AI Act is the first comprehensive legal framework regulating artificial intelligence. It entered into force on 1 August 2024 and will become fully applicable on 2 August 2026. However, rules concerning prohibited AI practices and AI literacy obligations have been in effect since 2 February 2025. The Act classifies AI systems into different risk categories based on the level of risk they pose, establishing specific legal obligations or outright prohibitions accordingly.
AI systems deemed to pose "unacceptable risk" are banned under the Act. These include systems used for social scoring, manipulative or deceptive AI applications, emotion recognition in workplaces and educational settings, live biometric identification for law enforcement in publicly accessible spaces, and the indiscriminate collection of internet or CCTV data to build or expand facial recognition databases.
AI systems considered "high-risk" must comply with strict legal requirements before they can be marketed or deployed. These obligations include conducting risk assessments, using high-quality datasets to avoid bias, maintaining activity logs, providing detailed documentation, ensuring user transparency, implementing human oversight, and meeting high standards of robustness, cybersecurity, and accuracy.
As detailed further below, the Act establishes a general ban on the use of live facial recognition and other forms of biometric surveillance by law enforcement in public spaces, while retrospective FRT is regulated as a high-risk AI system and subject to stringent compliance measures.
Live FRT is prohibited under the EU AI Act for law enforcement purposes, unless it is strictly necessary for one of the following objectives:
- the targeted search for specific victims of abduction, trafficking in human beings, or sexual exploitation of human beings, as well as the search for missing persons;
- the prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;
- the localisation or identification of suspects or perpetrators of exhaustively listed crimes.
The existence of domestic legislation is a prerequisite for its lawful use. Law enforcement agents cannot rely on the EU AI Act to use FRT without domestic laws authorising and regulating such use. Moreover, the use of live FRT for the three objectives listed above is subject to additional safeguards and conditions:
- First, its use is only allowed to “confirm the identity of a specifically targeted individual”.
- Second, authorities are required to conduct a mandatory fundamental rights impact assessment. The assessment must also show whether less intrusive alternatives exist for law enforcement use.
- Third, each individual use of a live FRT system requires prior authorisation by a judicial authority or an independent administrative authority whose decision is binding, except in cases of justified urgency.
- Fourth, the use of live FRT must be strictly limited by location, duration, and targeted individuals to meet the requirement of strict necessity. Unrestricted use, such as in public streets for general security or crowd control, amounts to mass surveillance and is prohibited.
- Fifth, no decision that produces an adverse legal effect on a person may be taken based solely on the output of live FRT. Further checks are needed where the only evidence relied on is produced by live FRT.
- Sixth, deployers of live FRT systems in public spaces for law enforcement purposes are obliged to register the system in the EU database provided for this purpose.
The EU AI Act classifies retrospective FRT for law enforcement purposes as a high-risk AI system. Consequently, the retrospective use of FRT in an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence will be subject to additional conditions and safeguards from 2 August 2026. These include:
- judicial or administrative authority's binding authorisation;
- strict necessity for a specific criminal offence;
- prohibition of indiscriminate surveillance;
- no adverse legal effect based solely on FRT; and
- documentation of each use for reporting and analysis purposes.
In addition to requirements under the EU AI Act, the EU Law Enforcement Directive (“LED”), Directive 2016/680, applies to retrospective FRT as it involves processing of special categories of data. The LED stipulates that special categories of personal data can be processed only where strictly necessary, subject to appropriate safeguards for the rights and freedoms of the data subject. Moreover, the processing of special categories of data may only take place if it is authorised by law, aims to protect the vital interests of the data subject or of another person, or when the processing relates to data which is manifestly made public by the data subject. According to the European Data Protection Board, a legislative measure cannot be invoked as a law authorising the processing of biometric data by FRT for law enforcement purposes if it is a mere transposition of the general clause in Article 10 LED.
FRT works by comparing an image of an unidentified person to a database that contains images matched with identities. The EU AI Act prohibits the 'placing on the market', 'the putting into service for this specific purpose' or 'use' of AI systems that indiscriminately scrape images from the internet or CCTV footage in order to create or expand facial recognition databases. The prohibition does not only cover databases whose sole purpose is to be used for facial recognition. It is sufficient that the database can be used for facial recognition.
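To make that matching step concrete, below is a minimal sketch in Python of the one-to-many comparison at the core of FRT. It is purely illustrative: the embed_face() function is a hypothetical stand-in for the trained neural network a real system would use, and the watchlist structure, names and 0.6 threshold are our own assumptions, not drawn from any particular FRT product.

```python
import numpy as np

# Hypothetical stand-in: real FRT systems use a trained neural network
# to map a face image to a fixed-length vector (a "biometric template").
def embed_face(image: np.ndarray) -> np.ndarray:
    vec = np.resize(image.astype(np.float64).ravel(), 128)  # toy embedding
    return vec / (np.linalg.norm(vec) + 1e-9)               # unit-normalise

# The database/watchlist: identities matched with pre-computed templates.
watchlist = {
    "person_a": embed_face(np.random.rand(64, 64)),
    "person_b": embed_face(np.random.rand(64, 64)),
}

def identify(probe_image: np.ndarray, threshold: float = 0.6):
    """Compare an unidentified face against every watchlist entry and
    report the best match only if it clears the similarity threshold."""
    probe = embed_face(probe_image)
    # Cosine similarity reduces to a dot product on unit vectors.
    scores = {name: float(probe @ tpl) for name, tpl in watchlist.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= threshold:
        return best, scores[best]   # claimed identification
    return None, scores[best]       # below threshold: no match reported

match, score = identify(np.random.rand(64, 64))
print(match, round(score, 3))
```

The similarity threshold is the decisive design choice: every deployment trades false matches against missed matches, which is one reason the misidentification harms described above cannot simply be engineered away.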
According to the European Commission's Guidelines, the prohibition does not apply to every scraping tool with which a facial recognition database may be constructed or expanded, but only to tools for untargeted scraping. The term ‘untargeted’ means absorbing as much data and information as possible, without specifically and individually targeting the intended subject(s) of the scraping. The Guidelines provide examples of ‘scraping’, such as "using web crawlers, bots, or other means to extract data or content from different sources, including CCTV, websites or social media, automatically."
The Council of Europe has also drafted guidance for legislators and decision makers to ensure that such existing databases, initially used for other purposes, “can only be used to extract biometric templates and integrate them into biometric systems when it is necessary for overriding legitimate purposes, provided for by law and strictly necessary and proportionate to these purposes".
The EU AI Act introduces a comprehensive and legally binding framework regulating the use of FRT across Member States. Crucially, the Act classifies the use of live FRT in public spaces as a prohibited practice, except for a narrow set of clearly defined exceptions examined above. Member States must adopt national legislation in full alignment with the EU AI Act before any use of live FRT can be considered lawful. Retrospective FRT has been categorised as a high-risk AI system. As such, it is subject to strict legal safeguards and oversight requirements to protect fundamental rights, particularly the right to privacy and data protection.
However, the early implementation stages of the EU AI Act have prompted mixed responses and revealed uneven levels of compliance among Member States. Troublingly, some national laws appear to contravene the Act by extending FRT use to investigations of minor offences, or by facilitating its use in ways that risk targeting marginalised communities and civil society groups.
At the same time, several stakeholders have argued that the Act could have gone further, especially in regulating retrospective FRT. For example, Austria has raised concerns that retrospective FRT used by law enforcement constitutes a serious interference with individuals’ fundamental rights. Accordingly, Austria underlined that it should have been included in the list of prohibited practices of the EU AI Act.
As implementation progresses, it is essential that Member States' compliance with the EU AI Act is closely monitored. The coming months and years will be critical in ensuring that the protections envisioned by the regulation are not only enshrined in law but fully realised in practice.
Across the US there are a variety of examples of measures introduced to curtail the use of FRT, such as limiting law enforcement use of FRT to specific crimes; safeguards for fair trial and due process; requiring judicial authorisation before deployment; introducing supervisory authorities to provide oversight of its use; data protection safeguards; and regulation of the databases used for FRT. A federal framework may be absent, but at least sixteen states had passed facial recognition-specific regulation as of July 2022, with others having comprehensive police surveillance regulations that also apply to facial recognition.
Several US states have prohibited certain types or uses of FRT outright. In Virginia, live tracking of an individual in a public space is prohibited. In 2020, California's legislature passed a three-year bill (which expired in January 2023) that prohibited law enforcement agencies and officers from installing, activating, or using any biometric surveillance system in connection with an officer camera or data collected by an officer camera. In August 2024, California also rejected a bill that would have authorised FRT use by law enforcement.
In Washington, state and local government agencies are prohibited from using facial recognition technology to target individuals based on their religion, political or social views or activities, participation in lawful or noncriminal organizations or events, or any legally protected characteristic, such as race, ethnicity, citizenship, national origin, immigration status, age, disability, gender, gender identity, or sexual orientation.
Limitations on uses of FRT
Vermont has implemented legislation regulating police use of drones enabled with FRT. Unless otherwise approved by the legislature, law enforcement agencies may not use FRT or information derived from it, with the sole exception of investigating child sexual exploitation. Similarly, the State of Maine allows the use of FRT by public authorities for limited purposes only, such as investigating a serious crime or assisting in the identification of a deceased or missing person. In the city of New Orleans, the use of FRT is only allowed for investigating persons who have previously committed crimes, and is limited to the investigation of specific, enumerated crimes listed in the criminal code of the State of Louisiana. FRT cannot be used for investigations relating to abortion or consensual sexual acts.
When FRT is deployed without proper safeguards, individuals under investigation may not have sufficient insight to defend themselves against allegations based on the technology. The Appellate Division of the Superior Court of New Jersey recently held that where the State intends to rely upon novel technologies, the defendant is entitled to discovery related to those technologies, especially “given the fact FRT is novel and untested, and the possibility that errors in the technology may exculpate the defendant”.
In New Orleans, the use of FRT requires judicial approval on a case-by-case basis and mandates a public hearing by the Criminal Justice Committee to review the effectiveness of the ordinance, among other strict conditions. In Washington, FRT can only be used by a state or local government agency if: (1) it obtains a warrant, (2) exigent circumstances exist, or (3) a court order is obtained authorizing use of the service for exhaustively listed purposes.
In Virginia, the Department of State Police, local law enforcement agencies, and campus police departments are prohibited from creating a database of images using a live video feed for the purpose of using facial recognition technology. The State of California rejected a bill that would have authorized the creation of a database of state photo records, including drivers' licences and arrest photographs taken by law enforcement, for the purposes of facial recognition.
Colorado’s 2022 Bill is exemplary in respect of the accountability principle of data protection. It requires the agency deploying FRT to produce an accountability report that encapsulates existing policies, training procedures, description of potential impacts on civil rights and liberties, data management policies, mechanisms to receive feedback from affected populations, and disclosure of complaints and reports of bias and false matches, among others.
The Office of the Australian Information Commissioner recognises the “very real community concern around the privacy risks associated with FRT”. The Office considers it best practice to undertake a privacy impact assessment, which should encourage deployers to adopt a privacy by design approach to their use of FRT.
As is the case in the UK, and despite the above examples, there are many jurisdictions where FRT is deployed without a sufficient legal basis and safeguards. Research indicates a concerning global rise in the use of facial recognition technologies, with approximately 75% of governments deploying FRT on a large scale and approximately 75% of police forces globally having access to some form of FRT. It does not only operate on the streets: FRT appears more and more in banks, airports, public transportation, schools and workplaces, posing distinct legal challenges within each context. For instance, around 40% of countries have reportedly introduced FRT in workplace settings, often for purposes such as employee attendance tracking, access control, and security monitoring.
The following examples provide a glimpse into the troubling use of FRT in the absence of sufficient safeguards and policies. This growing global trend demands urgent attention and action.
In Brazil, there is no general or tailored legislation or regulation on the use of FRT, including by law enforcement agencies, yet its use is widespread, including within educational settings, leading to serious human rights concerns. An increasing number of Brazilian cities, such as Curitiba, Salvador, Porto Alegre, and Brasília, have adopted FRT to combat ticket fraud in public transportation.
Canada provides a similar example, where the government has not yet taken the necessary steps to restrict the use of this new technology. Instead, law enforcement agencies have invested in “technologies powered by artificial intelligence, with few controls and virtual impunity”. Canada, along with the United States and Australia, has reportedly been increasingly implementing FRT in schools.
Russia has installed more than half a million FRT cameras across the country, with Moscow accounting for more than 200,000 cameras. In 2018, three-quarters of Moscow’s public space and 90% of residential areas were already covered by surveillance cameras. In recent years, there have been multiple instances where FRT has been used to identify and arrest protesters, raising serious concerns about surveillance and human rights.
China is a global actor that deploys FRT for a wide variety of purposes, building not only a national surveillance architecture but also a social credit system. FRT has been used to penalise even the smallest of offences, such as jaywalking, which can affect a person's credit score.
The unchecked proliferation of FRT in the UK and across the world, particularly its deployment by law enforcement, raises grave concerns about privacy, accountability and human rights. Despite clear evidence of bias, inaccuracies, and the technology’s potential to erode democratic freedoms, its use continues to expand without a proper legal framework or public knowledge. The UK's failure to pass primary legislation specifically regulating FRT and implementing robust safeguards contrasts with jurisdictions like the EU and parts of the US, where tighter controls and legal safeguards have been put in place. A lack of safeguards leaves individuals vulnerable to unwarranted surveillance and harm, especially those from marginalised communities.
In light of these concerns, PI reiterates its call to ban the use of live FRT in public places by state and non-state actors. The introduction of live FRT would result in the normalisation of surveillance across all levels of society and accordingly have a “chilling effect” on the exercise of fundamental rights, such as our freedom of expression and freedom of assembly.
Facial recognition also has no place in educational settings. Students are equally entitled to the right to privacy, and children cannot meaningfully consent to the use of these technologies.
All in all, the urgency to enact comprehensive legislation that bans certain uses of FRT and imposes robust safeguards on the rest has never been greater. Otherwise, the UK risks leading the way to a surveillance society that undermines the very principles of privacy, freedom, and justice that underpin democracy.