10 threats to migrants and refugees
This article presents some of the tools and techniques deployed as part of surveillance practices and data-driven immigration policies that routinely lead to discriminatory treatment of people and undermine their dignity, with a particular focus on the UK.
- Migrants are bearing the burden of the new systems and losing agency in their migration experience, particularly when their fate is placed in the hands of systems driven by data processing and so-called tech innovations.
- Large amounts of data are being requested from migrants, from their fingerprints to their digital data trails, while they are often put in a situation of constant surveillance.
- Private military and security companies play essential roles in providing a variety of surveillance tech and data exploitation ‘solutions’ and services to governments.
Over the last two decades we have seen an array of digital technologies being deployed in the context of border controls and immigration enforcement, with surveillance practices and data-driven immigration policies routinely leading to discriminatory treatment of people and undermining people’s dignity.
And yet this is happening with little public scrutiny, often in a regulatory or legal void, and without understanding of, or consideration for, the impact on migrant communities at the border and beyond.
These practices mean that migrants are bearing the burden of the new systems and losing agency in their migration experience, particularly when their fate is placed in the hands of systems driven by data processing and so-called tech innovations. There is a need to demand a more humane approach to immigration based on the principles of fairness, accessibility, and respect for human rights.
In this article we present some of these widely used tools and techniques, looking at the situation worldwide with a particular focus on the UK.
1. Data Sharing: turning public officials into border guards
Increasingly, every interaction migrants have within the immigration enforcement framework requires the processing of their personal data. The use of this data and of new technologies is today driving a revolution in immigration enforcement which risks undermining people’s rights and requires urgent attention.
Large amounts of data are being requested from migrants, from their fingerprints to their digital data trails, while they are often put in a situation of constant surveillance. Life-changing decisions are being made on the basis of the data being collected but also inferred and observed, and yet there are limited safeguards in place to regulate and oversee the use of tech and data processing in immigration processes.
Launching the Care Don’t Share report, Liberty wrote:
In the UK, as part of its discredited ‘hostile environment’ policy, the Government has set up a series of shadowy deals letting Home Office immigration enforcement teams access data – like personal addresses – collected by schools, hospitals, job centres, and the police and use it to track down children and adults for deportation.
People should be able to access essential public services – like sending their children to school, seeking medical care and reporting crime – without fear of immigration enforcement.
In view of existing and expanding data processing policies and practices for immigration purposes, there is an urgent need to regulate and monitor entities who undertake or are involved in the processing of migrants’ data to ensure they comply with internationally recognised data protection principles and standards, as well as human rights.
Immigration enforcement and border management authorities cannot be exempt from having to protect migrants and their data. This is why in 2019 Privacy International, and several migrant and digital rights organisations, joined a formal complaint filed by the Platform for International Cooperation on Undocumented Migrants (PICUM) against the United Kingdom for failing to respect the General Data Protection Regulation (GDPR) by including an “immigration control” exemption in the Data Protection Act adopted in 2018.
2. Mobile Phone Extraction: your phone is fair game
Governments are increasingly using migrants’ electronic devices as verification tools, often to corroborate the information they provide to the authorities. This practice is enabled by the use of mobile extraction tools, which allow whoever operates them to download key data from a smartphone, including contacts, call data, text messages, stored files, location information, and more.
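To give a sense of how much a single extraction reveals, here is a purely illustrative sketch: invented data and a toy summary function, not any vendor’s actual tooling, which produces far richer reports.

```python
from datetime import date

# Hypothetical output of an extraction tool: records grouped by category,
# each reduced here to just a timestamp.
extracted = {
    "contacts":  [date(2018, 3, 1), date(2020, 6, 12)],
    "messages":  [date(2019, 1, 5), date(2020, 6, 30), date(2020, 7, 2)],
    "locations": [date(2020, 5, 20), date(2020, 7, 1)],
}

def summarise(dump):
    """Return, per category, the record count and the date range covered."""
    return {
        category: {"count": len(stamps),
                   "from": min(stamps).isoformat(),
                   "to": max(stamps).isoformat()}
        for category, stamps in dump.items()
    }

# Even this crude summary shows years of a person's life laid out by category.
print(summarise(extracted)["messages"])
```

Even before any analysis, the mere inventory of categories and date ranges exposes the shape of someone’s private life over years.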
These practices constitute a serious interference with the right to privacy and are neither necessary nor proportionate. The assumption that data obtained from digital devices leads to reliable evidence is also flawed. If a person claims certain information is true, and there exists information on their smartphone suggesting otherwise, that is not evidence that they are being disingenuous. There are a variety of legitimate reasons why the data extracted might differ from the information provided by an applicant.
Germany, Denmark, Austria, Norway, the United Kingdom and Belgium are among the countries where we have seen laws allowing for the seizure of mobile phones from asylum or migration applicants, from which data is then extracted and used as part of asylum procedures.
These technologies are also used by local police: in March 2018, we uncovered that 26 out of 47 UK police forces used mobile phone extraction – and three were about to trial it for the first time.
3. Social Media Intelligence: what does a Facebook like say about you?
Over the last decade, we have seen governments across sectors, including for immigration enforcement purposes, resort to social media intelligence (SOCMINT): the techniques and technologies that allow companies or governments to monitor social networking sites (SNSs), such as Facebook or Twitter.
Some of these activities are undertaken directly by governments themselves, but in some instances governments call on companies to provide them with the tools and/or know-how to undertake these sorts of activities.
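At its crudest, this sort of monitoring is a keyword filter run over public posts at scale. The following toy sketch (invented names, posts and watch terms; no real platform API) shows the principle:

```python
# Illustrative only: a toy keyword filter over public posts, the crude core
# of what SOCMINT tooling automates at scale. All data here is invented.
posts = [
    {"user": "alice", "text": "Crossing into Greece tomorrow", "public": True},
    {"user": "bob",   "text": "Lovely weather today",          "public": True},
    {"user": "cara",  "text": "Asylum interview next week",    "public": False},
]

WATCH_TERMS = {"crossing", "asylum", "border"}

def flag_posts(stream, terms):
    """Return public posts whose text contains any watched term."""
    return [p for p in stream
            if p["public"] and terms & set(p["text"].lower().split())]

flagged = flag_posts(posts, WATCH_TERMS)
print([p["user"] for p in flagged])  # ['alice'] — only public posts match
```

Note how blunt the matching is: an innocuous holiday post mentioning a “border” would be flagged just the same, which is part of why this technique sweeps in so many people.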
In September 2019, Frontex, the European Border and Coast Guard Agency, published a call for tender to pay €400,000 to a surveillance company to "track people on social media.” After we asked whether Frontex had gone through the necessary checks to make sure their plan was legal, they decided to cancel the tender process.
Our recent report “Is your local authority looking at your Facebook likes?” looks at how councils and local authorities in Great Britain are increasingly using this technique as part of their intelligence gathering and investigation tactics in areas such as council tax payments, children’s services, benefits and monitoring of protests and demonstrations. Could you be a target?
4. Predictive Policing: a feedback loop that reinforces racial bias
Predictive policing programs are used by the police to estimate when and where crimes are likely to be committed – or who is likely to commit them. These programs work by feeding historic policing data through computer algorithms. For example, a program might evaluate data about past crimes to predict where future crimes will happen – identifying “hot spots” or “boxes” on a map.
But the data these programs use is incomplete and biased, leading to a “feedback loop” – sending officers to communities that are already unfairly over-policed. Other predictive policing programs may suggest how people will behave. These programs are fed information about a person, and then they decide whether that person is likely to commit an offence.
While we might be tempted to assume that computer programs and algorithms are neutral, this is not the case. The data that is fed into these systems is incomplete or based on human biases, leading to decisions that perpetuate pre-existing social inequalities. For example, mapping programs often send officers back to monitor the same over-policed communities again and again. As shown by several studies, data-driven policing can lead to racial profiling and reinforce racial bias in the criminal justice system.
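The feedback loop described above can be made concrete with a toy simulation. Everything here is invented (two areas, identical true offending, a crude “send patrols to the hot spot” rule), not any real deployment, but it shows how a skew in historical records compounds:

```python
# Toy model of the predictive-policing feedback loop (invented numbers).
# Two areas have the SAME true offence rate, but area A starts with more
# recorded crime because it was historically over-policed.
recorded = {"A": 60, "B": 40}   # biased historical records
TRUE_RATE = 0.5                 # identical underlying offending everywhere
PATROLS = 100

def one_round(records):
    """Send all patrols to the predicted 'hot spot'; feed detections back in."""
    hotspot = max(records, key=records.get)   # the algorithm flags the top area
    detections = PATROLS * TRUE_RATE          # detections follow the patrols
    updated = dict(records)
    updated[hotspot] += detections            # ...and become next round's data
    return updated

for _ in range(5):
    recorded = one_round(recorded)

share_a = recorded["A"] / sum(recorded.values())
print(round(share_a, 2))  # → 0.89: area A's initial 60% share of records has grown
```

Although both areas offend at the same rate, the model only ever “learns” about the area it already patrols, so the recorded gap widens every round.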
In the UK, the use of predictive policing programs is not covered by any law or regulation. This makes it incredibly difficult to understand how these programs are used, how they come to decisions about us or our communities, and how we can challenge those decisions.
And yet, Police forces across the UK use or have used predictive policing programs:
- Kent police hit headlines with its use of PredPol, a program developed in the United States, which directs officers where to patrol based on predictive mapping software.
- Durham Constabulary is renowned for its Harm Assessment Risk Tool (HART), which assesses whether someone is likely to reoffend using crude profiling from data about a person’s family, housing and financial status.
- Avon and Somerset Police has reportedly used a predictive analytics platform.
- West Midlands Police has trialled predictive policing tools.
Much like for social media monitoring, knowing that our data is being collected and used to make decisions about us may ultimately lead to us censoring our own behaviour. For example, if our local community is a “hot spot” for policing activity, we may change where we go and what we do in our local area and even who we spend time with.
5. Lie Detectors: security on scientifically dubious grounds
There are few places in the world where an individual is as vulnerable as at the border of a foreign country.
The use of extraction tools is part of a broader trend of aiming surveillance and other security technology at asylum seekers and migrants, often on scientifically dubious grounds. In Europe, this includes the use of technology which supposedly identifies if a person is lying based on their ‘micro-gestures’, a person’s origin based on their voice, and their age based on their bones.
The European Union’s Horizon 2020 research and innovation programme has been funding a project called iBorderCtrl, defined as “an innovative project that aims to enable faster and thorough border control for third country nationals crossing the land borders of EU Member States”. In addition to other features, the system undertakes automated deception detection.
This is highly experimental technology whose results cannot be trusted, as media investigations have reported, and yet it is used to make life-changing decisions.
6. Border Externalisation: outsourcing border controls and surveillance
“Border Externalisation”, the transfer of border controls to foreign countries, has in the last few years become the main instrument through which the United States and the European Union (EU) seek to stop migratory flows.
It relies on utilising modern technology, training, and equipping authorities in third countries to export the border far beyond its shores.
Countries with the largest defence and security sectors are transferring technology and practices to governments and agencies around the world, including to some of the most authoritarian countries in the world. China, European countries, Israel, the US, and Russia, are all major providers of such surveillance worldwide, as are multilateral organisations such as the European Union. The surveillance industry is playing an essential role in the process.
Their involvement is enabled by the adoption of ad hoc funds, like the controversial “EU-Turkey deal”, an agreement which saw €6 billion given to Turkey in exchange for its commitment to seal its borders with Greece and Syria, and the EU Trust Fund for Africa (EUTF).
This facilitates serious violations of human rights, reinforces authoritarianism, undermines governance, and drives corruption. It also diverts money and other resources away from development and other aid, instead giving billions of dollars to security agencies and surveillance companies.
7. Biometrics Processing: a feast of databases
As with many other sectors, we have seen the deployment of biometric systems in immigration and border management mechanisms. Biometric technology is provided by companies to serve a variety of purposes, including in screening and/or determination of asylum as part of age and origin verification, as well as registration, authentication and verification of identity.
Inconsistencies and errors in databases also result in large numbers of misidentifications. Identification failures disproportionately affect people of particular races, classes and age groups.
In 2003, the EURODAC Regulation was adopted, setting up a central EU asylum fingerprint database. The fingerprints of any person over the age of 14 seeking asylum anywhere in the European Union are transmitted to this central database. It is used for fingerprint comparison evidence to help determine the Member State responsible for examining an asylum application made in the EU, ensuring compliance with Regulation (EU) No. 604/2013 (‘the Dublin Regulation’), which requires those seeking asylum to submit their claim in the first EU country they enter. Legislative negotiations are ongoing to expand this database, with the aim of gathering more personal data from more people and lowering the age of data collection from fourteen to six years of age.
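At its core, the Dublin check is a lookup: a new fingerprint is compared against the central database, and a hit points to the first member state of entry. A minimal, purely illustrative sketch follows; fingerprint templates are stood in for by opaque strings, whereas a real system performs probabilistic biometric matching, not exact comparison.

```python
# Toy EURODAC-style lookup (illustrative data, invented template strings).
central_db = {
    "template-7f3a": {"member_state": "IT", "registered": "2019-04-02"},
    "template-91bc": {"member_state": "EL", "registered": "2020-01-15"},
}

def dublin_check(template, db):
    """Return the member state responsible under the first-entry rule, if any."""
    hit = db.get(template)
    return hit["member_state"] if hit else None

print(dublin_check("template-7f3a", central_db))  # IT — claim referred to Italy
print(dublin_check("template-0000", central_db))  # None — no prior registration
```

The stakes of the lookup are what matter: a single match, right or wrong, can determine which country decides a person’s asylum claim.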
As of August 2018, according to the United States Department of State International Narcotics Control Strategy Report for the year 2019, the biometric data sharing program between the governments of Mexico and the United States was active in all 52 migration processing stations in Mexico. The program uses biometric information to screen detained migrants in Mexico who have allegedly previously tried to cross the U.S. border or are “members of a criminal gang”.
There is also a lack of transparency in this process: for example, despite contradicting evidence, Mexico’s National Institute of Migration has denied processing biometric data in answers to freedom of access to information requests submitted by our Mexican partner R3D.
In the UK, not even a pandemic deters the government: asylum-seekers are still required by the Home Office to physically register their application for asylum at selected venues. This requirement persists even in the face of legislation allowing for the suspension of biometrics collection in the event of a public health emergency.
- Read the report “Data Protection, Immigration Enforcement and Fundamental Rights: What the EU’s Regulations on Interoperability Mean for People with Irregular Status” by our partner Statewatch and PICUM
8. Facial Recognition: making surveillance frictionless
Facial recognition typically refers to systems which collect and process data about a person’s face. Such systems are highly intrusive because they rely on the capture, extraction, storage or sharing of people’s biometric facial data.
In the context of policing, facial recognition can capture individuals’ facial images and process them in real time (“live FRT”) or at a later point (“Static” or “Retrospective FRT”). The collection of facial images results in the creation of “digital signatures of identified faces”, which are analysed against one or more databases (“Watchlists”), usually containing facial images obtained from other sources to determine if there is a match. This image processing can be done with the purpose of either identifying someone at that moment, training the facial recognition system to get better at identifying, or feeding their face to the system for further uses.
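The matching step above can be sketched as comparing a probe face’s numeric “signature” (an embedding vector) against watchlist signatures with a similarity threshold. Everything below is invented for illustration (tiny vectors, an arbitrary threshold); real FRT systems use high-dimensional embeddings from trained neural networks.

```python
import math

# Toy face matcher: faces are reduced to embedding vectors and compared by
# cosine similarity against a watchlist. Vectors and threshold are invented.
watchlist = {
    "suspect-42": [0.9, 0.1, 0.4],
    "suspect-77": [0.1, 0.8, 0.2],
}
THRESHOLD = 0.95

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def match(probe, db, threshold=THRESHOLD):
    """Return (best_id, score) if any watchlist entry exceeds the threshold."""
    best = max(db, key=lambda k: cosine(probe, db[k]))
    score = cosine(probe, db[best])
    return (best, score) if score >= threshold else (None, score)

ident, score = match([0.88, 0.12, 0.41], watchlist)
print(ident)  # suspect-42
```

The threshold is where the harm concentrates: set it lower and false matches multiply, and those errors fall disproportionately on groups the underlying model was not well trained on.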
The use of this technology by both police and/or private actors has a seismic impact on the way our society is policed or broadly monitored, in particular on communities of colour.
Misidentification rates are far higher for minority groups, who are then more likely to be over-policed through wrongful stops and questioning. For example, past facial recognition trials in London resulted in an error rate greater than 95 per cent, leading even to a 14-year-old black schoolboy being “fingerprinted after being misidentified”.
The rollout of such intrusive technology not only poses significant privacy and data protection questions, but also ethical questions around whether modern democracies should ever permit its use and to what extent.
Following extensive civil society campaigns, some cities are banning facial recognition and companies like Amazon, IBM and Microsoft have announced temporary moratoriums on their use of facial recognition - but these announcements seem more like PR stunts made under pressure than a true commitment.
9. Artificial Intelligence: your fate in the hands of the system
The term ‘AI’ is used to refer to a diverse range of applications and techniques, at different levels of complexity, autonomy and abstraction. This broad usage encompasses machine learning (which makes inferences, predictions and decisions about individuals), domain-specific AI algorithms, fully autonomous and connected objects and even the futuristic idea of an AI ‘singularity’. This lack of definitional clarity is a challenge: different types of AI systems and applications raise specific ethical and regulatory issues. Different applications and uses of AI can affect the right to privacy and other fundamental rights and freedoms in different ways.
AI methods are being used to identify people who wish to remain anonymous; to infer and generate sensitive information about people from their non-sensitive data; to profile people based upon population-scale data; and to make consequential decisions using this data, some of which profoundly affect people’s lives. And these are just some examples.
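As a toy illustration of one of those points, inferring sensitive information from non-sensitive data, consider location history. The rules and data below are entirely invented to show the principle; no real system is depicted:

```python
# Toy inference engine: guesses sensitive attributes from ordinary location
# pings. Rules and data are invented to illustrate the principle only.
visits = ["gym", "mosque", "supermarket", "mosque", "clinic", "mosque"]

SENSITIVE_PLACES = {
    "mosque": "religion: likely Muslim",
    "clinic": "health: attends a clinic",
}

def infer(history, rules, min_visits=2):
    """Attach a sensitive label to any place visited at least min_visits times."""
    counts = {}
    for place in history:
        counts[place] = counts.get(place, 0) + 1
    return [label for place, label in rules.items()
            if counts.get(place, 0) >= min_visits]

print(infer(visits, SENSITIVE_PLACES))  # ['religion: likely Muslim']
```

None of the input data is “sensitive” on its own; it is the aggregation and inference that produce the sensitive conclusion, which is exactly why profiling at population scale is so consequential.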
New technologies, including AI and automated decision-making, have been deployed in immigration enforcement in a variety of ways. These have included lie detectors at the border (see number 5), automated decision-making about visitor visa applications, the identification of refugees, and digital border monitoring systems.
These practices mean that for many migrants their fate is being put in the hands of an automated system.
10. Private Companies: when the border is a good business
The use of powerful and intrusive technology that enables authorities to gather intimate details about people’s lives carries significant dangers.
In this context, private military and security companies have come to play essential roles in providing a variety of surveillance tech and data exploitation ‘solutions’ and services to governments. The insertion of for-profit actors, such as surveillance companies offering what they present as easy technological solutions, into immigration enforcement mechanisms is inherently dangerous.
Cellebrite is a surveillance firm that markets itself as the “global leader in digital intelligence”. In 2019, the company was promoting its digital extraction devices to a new target: authorities interrogating people seeking asylum.
US Immigration and Customs Enforcement (ICE), the agency at the centre of the deployment of President Trump’s “zero tolerance” approach to immigration enforcement and family separation, has for years been contracting a US surveillance company to intercept peoples’ communications across the United States.
Millions of people being forced to migrate because of war, persecution, and climate change is a call for urgent political and social action, not a business opportunity. These companies are installing and often operating immigration and border surveillance systems with no consideration of the protections of the rights of migrants.
But technology at the border seems to be a good business for them.
- Read our submission to the ‘UN Working Group on the use of mercenaries’ on the role of private companies in immigration and border management and the impact on the rights of migrants
To respond to migration flows, governments worldwide have prioritised an approach that criminalises the act of migration and focuses on security.
Borders are not only those we can see: we are witnessing an increasing externalisation of migration controls, with the transfer of border management to third countries and the rise of digital borders, such as digital portals and databases. Technological developments, like the ones mentioned above, render immigration enforcement borders invisible.
This list is not exhaustive - and it will soon become obsolete.
We will keep fighting for governments and public authorities to stop using invasive techniques for immigration control and for companies to stop supplying invasive techniques that curtail the rights of migrants.