Content type: Advocacy
Why the EU AI Act fails migration
The EU AI Act seeks to provide a regulatory framework for the development and use of the most ‘risky’ AI within the European Union. The legislation outlines prohibitions on ‘unacceptable’ uses of AI, and sets out a framework of technical, oversight and accountability requirements for ‘high-risk’ AI when deployed or placed on the EU market.
Whilst the AI Act takes positive steps in other areas, the legislation is weak and even enables dangerous systems in the…
Content type: Advocacy
Generative AI models are based on indiscriminate and potentially harmful data scraping
Existing and emergent practices of web-scraping for AI are rife with problems. We are not convinced they stand up to the scrutiny and standards expected by existing law. If the wrong balance is struck here, then people stand to have their right to privacy further violated by new technologies. The approach taken by the ICO towards web scraping for generative AI models may therefore have important downstream…
Content type: Press release
9 November 2023 - Privacy International (PI) has just published new research into UK Members of Parliament’s (startling lack of) knowledge of the use of Facial Recognition Technology (FRT) in public spaces, even within their own constituencies. Read the research published here in full: "MPs Asleep at the Wheel as Facial Recognition Technology Spells The End of Privacy in Public". PI has recently conducted a survey of 114 UK MPs through YouGov. Published this morning, the results are seriously…
Content type: Advocacy
Our submission focussed on the evolving impacts of (i) automated decision-making, (ii) the digitisation of social protection programmes, (iii) sensitive data-processing and (iv) assistive technologies in the experiences and rights of people with disabilities.
We called on the OHCHR to:
Examine the impact that growing digitisation and the use of new and emerging technologies across sectors has upon the rights of persons with disabilities;
Urge states to ensure that the deployment of digital…
Content type: Advocacy
We submitted a report to the Commission of Jurists on the Brazilian Artificial Intelligence Bill, focussing on the potential harms associated with the use of AI within schools and the additional safeguards and precautions that should be taken when implementing AI in educational technology.
The use of AI in education technology and schools has the potential to interfere with the child’s right to education and the right to privacy which are upheld by international human rights…
Content type: News & Analysis
What if we told you that every photo of you, your family, and your friends posted on your social media or even your blog could be copied and saved indefinitely in a database with billions of images of other people, by a company you've never heard of? And what if we told you that this mass surveillance database was pitched to law enforcement and private companies across the world?
This is more or less the business model and aspiration of Clearview AI, a company that only received worldwide…
Content type: News & Analysis
Last month, the World Health Organization published its guidance on Ethics and Governance of Artificial Intelligence for Health. Privacy International was one of the organisations tasked with reviewing the report. We want to start by acknowledging that this report is a very thorough one that does not shy away from the risks and limitations of the use of AI in healthcare. As is often the case with guidance notes of this kind, its effectiveness will depend on the…
Content type: Examples
France has been testing AI tools with security cameras supplied by the French technology company Datakalab in the Paris Metro system and buses in Cannes to detect the percentage of passengers who are wearing face masks. The system does not store or disseminate images and is intended to help authorities anticipate future outbreaks.
https://www.theguardian.com/world/2020/jun/18/coronavirus-mass-surveillance-could-be-here-to-stay-tracking
Writer: Oliver Holmes, Justin McCurry, and Michael Safi…
Content type: Examples
A growing number of companies - for example, San Mateo start-up Camio and AI startup Actuate, which uses machine learning to identify objects and events in surveillance footage - are repositioning themselves as providers of AI software that can track workplace compliance with covid safety rules such as social distancing and wearing masks. Amazon developed its own social distancing tracking technology for internal use in its warehouses and other buildings, and is offering it as a free tool to…
Content type: Examples
After governments in many parts of the world began mandating the wearing of masks in public, researchers in China and the US published datasets of images of masked faces scraped from social media sites to use as training data for AI facial recognition models. Researchers from the startup Workaround, who published the COVID19 Mask Image Dataset to Github in April 2020, claimed the images were not private because they were posted on Instagram and therefore permission from the posters was not…
Content type: Examples
Researchers are scraping social media posts for images of mask-covered faces to use to improve facial recognition algorithms. In April, researchers published to Github the COVID19 Mask Image Dataset, which contains more than 1,200 images taken from Instagram; in March, Wuhan researchers compiled the Real World Masked Face Dataset, a database of more than 5,000 photos of 525 people they found online. The researchers have justified the appropriation by saying images posted to Instagram are public…
Content type: Examples
Many of the steps suggested in a draft programme for China-style mass surveillance in the US are being promoted and implemented as part of the government’s response to the pandemic, perhaps due to the overlap of membership between the National Security Commission on Artificial Intelligence, the body that drafted the programme, and the advisory task forces charged with guiding the government’s plans to reopen the economy. The draft, obtained by EPIC in a FOIA request, is aimed at ensuring that…
Content type: Long Read
Over the last two decades we have seen an array of digital technologies being deployed in the context of border controls and immigration enforcement, with surveillance practices and data-driven immigration policies routinely leading to discriminatory treatment of people and undermining people’s dignity.
And yet this is happening with little public scrutiny, often in a regulatory or legal void, and without understanding of, or consideration for, the impact on migrant communities at the border and…
Content type: Long Read
What Do We Know?
In late March, the NHS quietly announced that it would give technology businesses access to unprecedented quantities of patient data for processing and analysis in response to COVID-19. One of those businesses is CIA-backed Palantir Technologies. Palantir’s software is allegedly “mission critical” to US Immigration and Customs Enforcement’s (ICE) mass raids, detentions, and deportations. Despite trusting Palantir with patient data, the NHS has been tight-lipped about the scope…
Content type: Long Read
In April 2018, Amazon acquired “Ring”, a smart security device company best known for its video doorbell, which allows Ring users to see, talk to, and record people who come to their doorsteps.
What started out as a company pitch on Shark Tank in 2013 led to an $839 million deal, which has been crucial for Amazon to expand on their concept of the 21st-century smart home. It’s not just about convenience anymore: interconnected sensors and algorithms promise protection and provide a feeling of…
Content type: Examples
The AI firm Faculty, which worked on the Vote Leave campaign, was given a £400,000 UK government contract to analyse social media data, utility bills, and credit ratings, as well as government data, to help in the fight against the coronavirus. This is at least the ninth contract awarded to Faculty since 2018, for a total of at least £1.6 million. No other firm was asked to bid on the contract, as public bodies’ normal requirements for competitive procurement have been waived in the interests…
Content type: News & Analysis
Yesterday, Amazon announced that they will be putting a one-year suspension on sales of its facial recognition software Rekognition to law enforcement. While Amazon’s move should be welcomed as a step towards sanctioning company opportunism at the expense of our fundamental freedoms, there is still a lot to be done.
The announcement speaks of just a one-year ban. What exactly is Amazon expecting to change within that one year? Is one year enough to make the technology not discriminate…
Content type: Long Read
On 12 April 2020, citing confidential documents, the Guardian reported Palantir would be involved in a Covid-19 data project which "includes large volumes of data pertaining to individuals, including protected health information, Covid-19 test results, the contents of people’s calls to the NHS health advice line 111 and clinical information about those in intensive care".
It cited a Whitehall source "alarmed at the “unprecedented” amounts of confidential health information being swept up in the…
Content type: Press release
Today Privacy International, Big Brother Watch, medConfidential, Foxglove, and Open Rights Group have sent Palantir 10 questions about their work with the UK’s National Health Service (NHS) during the Covid-19 public health crisis and have requested that the contract be disclosed.
On its website Palantir says that the company has a “culture of open and critical discussion around the implications of [their] technology” but the company have so far…
Content type: News & Analysis
In mid-2019, MI5 admitted, during a case brought by Liberty, that personal data was being held in “ungoverned spaces”. Much about these ‘ungoverned spaces’, and how they would effectively be “governed” in the future, remains unclear. At the moment, they are understood to be a ‘technical environment’ where the personal data of an unknown number of individuals was being ‘handled’. The use of ‘technical environment’ suggests something more than simply a compilation of a few datasets or databases.
The…
Content type: Advocacy
On November 1, 2019, we submitted evidence to an inquiry carried out by the Scottish Parliament into the use of Facial Recognition Technology (FRT) for policing purposes.
In our submissions, we noted that rapid advances in the field of artificial intelligence and machine learning, and the deployment by police of new technologies that seek to analyse, identify, profile and predict, have had and will continue to have a seismic impact on the way society is policed.
The implications come not…
Content type: Examples
The US Department of Homeland Security awarded a $113 million contract to General Dynamics to carry out the Visa Lifecycle Vetting Initiative (VLVI), a renamed version of the Extreme Vetting Initiative and part of a larger effort called the National Vetting Enterprise. In May 2018, public outrage led the DHS to back away from a machine learning system that would monitor immigrants continuously; however, the reason it gave was that the technology to automate vetting did not yet exist. These…
Content type: Examples
VeriPol, a system developed at the UK's Cardiff University, analyses the wording of victim statements in order to help police identify fake reports. By January 2019, VeriPol was in use by Spanish police, who said it helped them identify 64 false reports in one week and was successful in more than 80% of cases. The basic claim is that AI can find patterns that are common to false statements; among the giveaways, experts say, false statements are likely to be shorter than genuine ones, focus…
Content type: Examples
In October 2018, the Singapore-based startup LenddoEFL was one of a group of microfinance startups aimed at the developing world that used non-traditional types of data such as behavioural traits and smartphone habits for credit scoring. Lenddo's algorithm uses numerous data points, including the number of words a person uses in email subject lines, the percentage of photos in a smartphone's library that were taken with a front-facing camera, and whether they regularly use financial apps on…
Content type: Examples
In November 2018, tests began of the €4.5 million iBorderCtrl project, which saw AI-powered lie detectors installed at airports in Hungary, Latvia, and Greece to question passengers travelling from outside the EU. The AI questioner was set to ask each passenger to confirm their name, age, and date of birth, and then query them about the purpose of their trip and who is paying for it. If the AI believes the person is lying, it is designed to change its tone of voice to become "more skeptical"…
Content type: Examples
In November 2018, worried American parents wishing to check out prospective babysitters and dissatisfied with criminal background checks began paying $24.99 for a scan from the online service Predictim, which claimed to use "advanced artificial intelligence" to offer an automated risk rating. Predictim based its scores in part on Facebook, Twitter, and Instagram posts - applicants were required to share broad access to their accounts - and offered no explanation of how it reached its risk…
Content type: Examples
In November 2018, researchers at Sweden's Lund University, the US's Worcester Polytechnic Institute, and the UK's Oxford University announced that in August the US State Department had begun using a software program they had designed that uses AI to find the best match for a refugee's needs, including access to jobs, medical facilities, schools, and nearby migrants who speak the same language. Known as "Annie MOORE", the program matched refugees who were finding jobs within 90 days about a…
Content type: Advocacy
During its 98th session, from 23 April to 10 May 2019, the UN Committee on the Elimination of Racial Discrimination (CERD) initiated the drafting process of general recommendation n° 36 on preventing and combatting racial profiling.
As part of this process, CERD invited stakeholders, including States, UN and regional human rights mechanisms, UN organisations or specialised agencies, National Human Rights Institutions, Non-Governmental Organisations (NGOs), research…
Content type: Examples
In 2017, US Immigration & Customs Enforcement (ICE) announced that it would seek to use artificial intelligence to automatically evaluate the probability of a prospective immigrant “becoming a positively contributing member of society.” In a letter to acting Department of Homeland Security Secretary Elaine Duke, a group of 54 concerned computer scientists, engineers, mathematicians, and researchers objected to ICE’s proposal and demanded that ICE abandon this approach because it would be…