Privacy International’s remarks at the side event of the 61st Session of the UN Human Rights Council on the Human Rights Impacts of Using Artificial Intelligence in Countering Terrorism

AI in counter-terrorism drives intrusive surveillance and unreliable risk scoring that can wrongly target individuals. Strong safeguards are essential because these systems amplify privacy harms and discriminatory outcomes.

Key points
  • Intrusive data collection for counter terrorism AI poses significant privacy risks.
  • Risk models can turn ordinary people into suspects without solid grounds.
  • Discriminatory outcomes arise when AI reflects biased data and lacks firm safeguards.

Image by TungArt7 from Pixabay

Artificial intelligence is increasingly being deployed in counter-terrorism efforts, from predictive risk scoring to automated surveillance. This raises urgent questions about what this means for privacy, equality, and due process. 

At a recent UN event examining a new position paper by the UN Special Rapporteur on counter-terrorism and human rights, addressing the human rights risks of AI in counter-terrorism, Privacy International spoke to the core dangers these systems pose: AI-driven tools drive intrusive surveillance and generate unreliable risk assessments that can wrongly flag individuals as threats.

These are both technical failures and human rights failures, with discriminatory outcomes baked into the design and deployment of systems that affect people's liberty, fair trial rights, and access to remedy.

Read Privacy International's full statement below:


Privacy International (PI) welcomes the Position Paper on the Human Rights Impacts of Using Artificial Intelligence in Countering Terrorism. We believe it is a very timely initiative, given the increasing interest in and use of AI technologies in counter-terrorism.

I would like to focus my remarks on the risks that the use of AI in counter-terrorism poses to the right to privacy.

There are features of existing AI technologies that expand, intensify or incentivise interference with the right to privacy, most notably through increased collection, analysis, retention and sharing of personal data.

Specifically in the context of counter-terrorist measures, the processing of data by AI systems further amplifies risks of human rights abuses.

Firstly, the processing of vast amounts of personal data in an indiscriminate and untargeted fashion raises concerns about mass surveillance and questions about compliance with the principles of necessity and proportionality.

For example, authorities may obtain personal data from private vendors and then analyse it at scale with AI to build profiles of individuals: their loyalties, their movement patterns in physical space (the data available can include GPS location data), and much more.

Secondly, the consequences of AI ‘decisions’ based on data processing can lead to serious interference with other human rights, such as the right to liberty and freedom of movement. In the counter-terrorism context, predictions, assessments and ‘decisions’ made by or with the support of AI technologies turn individuals into suspects. However, AI assessments by themselves should not be seen as a basis for reasonable suspicion, given the probabilistic nature of AI predictions.
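To illustrate why probabilistic outputs are such a weak basis for suspicion, consider a minimal back-of-the-envelope sketch of the base-rate problem. The accuracy figures and base rate below are hypothetical assumptions chosen for illustration, not figures from any real system or from the Position Paper:

```python
# Illustrative base-rate calculation (hypothetical numbers): even a very
# "accurate" risk model flags mostly innocent people when the behaviour it
# screens for is extremely rare in the monitored population.

def positive_predictive_value(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Probability that a flagged person is a true positive (Bayes' theorem)."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Assume a model that catches 99% of true threats (sensitivity) and wrongly
# flags only 1% of everyone else (specificity 99%), applied to a population
# where 1 in 100,000 people is an actual threat.
ppv = positive_predictive_value(0.99, 0.99, 1 / 100_000)
print(f"Share of flagged individuals who are actual threats: {ppv:.4%}")
# Roughly 0.1% — under these assumptions, fewer than 1 in 1,000 flags is correct.
```

Under these assumed numbers, over 99.9% of flagged individuals would be innocent, which is why a flag alone cannot amount to reasonable suspicion.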

Third, AI technologies have been used in ways that exacerbate discriminatory practices, particularly when they are used to infer characteristics and future behaviours on the basis of race, ethnicity, religion, or other status. They can perpetuate or even enhance discrimination, for example by reflecting embedded historic racial and ethnic bias in the data sets used, such as a disproportionate policing focus on certain groups.

Recently we have witnessed more examples of the blurring of the lines between law enforcement and military operations in the context of counter-terrorism. AI-powered surveillance and targeting systems operated by the military are designed to identify, generate, and sometimes counter perceived terrorist threats. By analysing vast datasets for behavioural patterns or other indicators deemed suspicious, these systems designate individuals as potential threats and enable responses to counter them. By design, such systems cannot function without extensive surveillance infrastructures that continuously monitor entire populations. Only through this constant surveillance can they identify deviations from what is labelled “normal” behaviour.

A similar logic underpins the rise of a new generation of military AI designed to gather, integrate, and analyse diverse data streams in real time to produce dynamic threat assessments. These systems draw from satellite imagery, drone footage, biometric data, and even social media activity, harvesting information across civilian and military environments alike.

By way of conclusion, proponents of AI tend to overstate its capabilities as well as its cost-effectiveness. I would argue that these claims are not accurate, at least in the context of using AI for counter-terrorism.

Let’s not forget that AI makes mistakes. AI algorithms are probabilistic, so their outputs carry inherent uncertainty. And unrealistic expectations can lead to the deployment of AI tools that are not equipped to achieve the desired goals.

Further, enhanced human rights safeguards are needed throughout the AI lifecycle in order to address the challenges I just mentioned. These safeguards are both costly and time-consuming, but they are necessary if we want AI to be human rights compliant.

For the sake of time, I will just list three of these safeguards:

First, the UN General Assembly has called on states to refrain from the use of AI technologies that are impossible to operate in compliance with international human rights law.

The fact that AI is deployed for purposes of counter-terrorism should not void the human rights safeguards applicable to such technologies. In fact, given the enhanced human rights risks, it should lead to more stringent limits and controls. 

Specifically, any interference with the right to privacy, including processing of personal data, requires a legal basis and must be limited to what is necessary and proportionate to a legitimate aim.

Applied to AI technologies, this means requiring that data should be collected and processed only for specific purposes and the amount of data processed should be kept to the minimum required. It also means assessing if less invasive approaches could achieve the same results.

Second, modern data protection laws have quite well-developed standards for transparency and accountability, which must apply to AI systems. This includes an overall prohibition (with narrow exceptions) on solely automated decisions where such decisions have legal or other significant effects. In this context, it is concerning that in many jurisdictions, intelligence and law enforcement agencies are excluded from the provisions of data protection legislation. That is a gap that needs addressing if AI technologies are to be used in counter-terrorism.

Third, in carrying out human rights due diligence prior to the deployment of AI systems, national authorities must include privacy impact assessments and develop a privacy-by-design and by-default approach. Particularly in counter-terrorism, measures should not be assessed in isolation; the cumulative effects of interacting measures must be considered. For example, before deciding to deploy new AI-based surveillance tools, a government must assess existing surveillance capacities and their effects on the right to privacy and other rights.