Search
Content type: Advocacy
In our submission, we argue that the EDPB's opinion must take a firm approach to prevent people's rights being undermined by AI. We focus on the following issues in particular: The fundamentally general nature of AI models creates problems for the legitimate interests test; The risks of an overly permissive approach to the legitimate interests test; Web scraping as ‘invisible processing’ and the consequent need for transparency; Innovative technology and people’s fundamental rights; The (in)…
Content type: Examples
Almost half of all job seekers are using AI tools such as ChatGPT and Gemini to help them write CVs and cover letters and complete assessments, flooding employers and recruiters with applications in an already-tight market. Managers have said they can spot giveaways that applicants used AI, such as US-style grammar and bland, impersonal language. Those whose AI-padded applications perform best are those who have paid for ChatGPT - overwhelmingly from higher socio-economic backgrounds, male, and…
Content type: Advocacy
In the wake of Privacy International’s (PI) campaign against the unfettered use of Facial Recognition Technology in the UK, MPs gave inadequate responses to concerns raised by members of the public about the roll-out of this pernicious mass-surveillance technology in public spaces. Their responses also sidestepped calls on them to take action. The UK is sleepwalking towards the end of privacy in public. The spread of insidious Facial Recognition Technology (FRT) in public spaces across the country…
Content type: Long Read
Introduction
In early October this year, Google announced its AI Overviews would now have ads. AI companies have been exploring ways to monetise their AI tools to compensate for their eye-watering costs, and advertising seems to be a part of many of these plans. Microsoft has even rolled out an entire Advertising API for its AI chat tools. As AI becomes a focal point of consumer tech, the next host of the AdTech expansion regime could well be the most popular of these AI tools: AI chatbots.…
Content type: Examples
The UK's Department for Education intends to appoint a project team to test edtech against set criteria to choose the highest-quality and most useful products. Extra training will be offered to help teachers develop enhanced skills. Critics suggest it would be better to run a consultation first to work out what schools and teachers want. Link to article Publication: Schools Week Writer: Lucas Cumiskey
Content type: Examples
The UK's new Labour government is giving AI models special access to the Department for Education's bank of resources in order to encourage technology companies to create better AI tools to reduce teachers' workloads. A competition for the best ideas will award an additional £1 million in development funds. Link to article Publication: Guardian Writer: Richard Adams
Content type: Examples
The Utah State Board of Education has approved a $3 million contract with Utah-based AEGIX Global that will let K-12 schools in the state apply for funding for AI gun detection software from ZeroEyes for up to four cameras per school. The software will work with the schools' existing camera systems, and notifies police when the detection of a firearm is verified at the ZeroEyes control centre. The legislature will consider additional funding if the early implementation is successful. The…
Content type: Long Read
The fourth edition of PI’s Guide to International Law and Surveillance brings together the most hard-hitting past and recent developments in international human rights law that reinforce the core human rights principles and standards on surveillance. We hope that it will continue helping researchers, activists, journalists, policymakers, and anyone else working on these issues. The new edition includes, among others, entries on (extra)territorial jurisdiction in surveillance, surveillance of public…
Content type: Explainer
Behind every machine is a human person who makes the cogs in that machine turn - there's the developer who builds (codes) the machine, the human evaluators who assess the basic machine's performance, even the people who build the physical parts for the machine. In the case of large language models (LLMs) powering your AI systems, these 'human persons' are the invisible data labellers from all over the world who manually annotate the datasets that train the machine to recognise what is the colour…
Content type: Explainer
Introduction
The emergence of large language models (LLMs) in late 2022 has changed people’s understanding of, and interaction with, artificial intelligence (AI). New tools and products that use, or claim to use, AI can be found for almost every purpose – they can write you a novel, pretend to be your girlfriend, help you brush your teeth, take down criminals or predict the future. But LLMs and other similar forms of generative AI create risks – not just big theoretical existential ones – but…
Content type: Advocacy
Generative AI models cannot rely on untested technology to uphold people's rights
The development of generative AI has been dependent on secretive scraping and processing of publicly available data, including personal data. However, AI companies have to date had an unacceptably poor approach towards transparency and have sought to rely on unproven ways to fulfil people's rights, such as to access, rectify, and request deletion of their data. Our view is that the ICO should adopt a stronger…
Content type: News & Analysis
Is the AI hype fading? Consumer products with AI assistants are disappointing across the board, and tech CEOs are struggling to give examples of use cases to justify spending billions on Graphics Processing Units (GPUs) and model training. Meanwhile, data protection concerns are still a far cry from being addressed.
Yet, the believers remain. OpenAI's presentation of ChatGPT was reminiscent of the movie Her (with Scarlett Johansson's voice even being replicated à la the movie), Google…
Content type: Advocacy
Privacy International (PI) welcomes the opportunity to provide input to the forthcoming report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance to the 56th session of the Human Rights Council, which will examine and analyse the relationship between artificial intelligence (AI) and non-discrimination and racial equality, as well as other international human rights standards. AI applications are becoming a part of everyday life:…
Content type: Advocacy
AI-powered employment practices: PI's response to the ICO's draft recruitment and selection guidance
The volume of data collected and the methods used to automate recruitment with AI pose challenges for the privacy and data protection rights of candidates going through the recruitment process. Recruitment is a complex and multi-layered process, and so is the AI technology intended to service this process at one or all stages of it. For instance, an AI-powered CV-screening tool using natural language processing (NLP) methods might collect keyword data on candidates, while an AI-powered video…
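To make the keyword collection described above concrete, here is a minimal sketch in Python; the keyword list, threshold and shortlisting rule are invented for illustration and are not the method of any specific vendor or of the ICO's draft guidance.

# Minimal, hypothetical sketch of keyword-based CV screening.
# The JOB_KEYWORDS set and shortlist threshold are invented for illustration.
import re
from collections import Counter

JOB_KEYWORDS = {"python", "sql", "agile", "stakeholder", "budget"}

def extract_keyword_data(cv_text: str) -> Counter:
    """Count occurrences of each job keyword in the CV text."""
    tokens = re.findall(r"[a-z]+", cv_text.lower())
    return Counter(t for t in tokens if t in JOB_KEYWORDS)

def shortlist(cv_text: str, min_distinct_keywords: int = 3) -> bool:
    """Flag the CV if it mentions enough distinct keywords."""
    return len(extract_keyword_data(cv_text)) >= min_distinct_keywords

cv = "Data analyst with Python and SQL experience; agile delivery and stakeholder management."
print(extract_keyword_data(cv))   # keyword data collected about the candidate
print(shortlist(cv))              # True: four distinct keywords matched

Even this toy version shows how much candidate data a screening pipeline accumulates at a single stage, which is the privacy and data protection concern the response raises.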
Content type: Advocacy
Why the EU AI Act fails migration
The EU AI Act seeks to provide a regulatory framework for the development and use of the most ‘risky’ AI within the European Union. The legislation outlines prohibitions for ‘unacceptable’ uses of AI, and sets out a framework of technical, oversight and accountability requirements for ‘high-risk’ AI when deployed or placed on the EU market.
Whilst the AI Act takes positive steps in other areas, the legislation is weak and even enables dangerous systems in the…
Content type: Advocacy
Generative AI models are based on indiscriminate and potentially harmful data scraping
Existing and emergent practices of web scraping for AI are rife with problems. We are not convinced they stand up to the scrutiny and standards expected by existing law. If the balance is struck wrongly here, then people stand to have their right to privacy further violated by new technologies. The approach taken by the ICO towards web scraping for generative AI models may therefore have important downstream…
Content type: Examples
The US Department of Homeland Security awarded a $113 million contract to General Dynamics to carry out the Visa Lifecycle Vetting Initiative (VLVI), a renamed version of the Extreme Vetting Initiative and part of a larger effort called the National Vetting Enterprise. In May 2018, public outrage led the DHS to back away from a machine learning system that would monitor immigrants continuously; however, the reason it gave was that the technology to automate vetting did not yet exist. These…
Content type: Examples
VeriPol, a system developed at the UK's Cardiff University, analyses the wording of victim statements in order to help police identify fake reports. By January 2019, VeriPol was in use by Spanish police, who said it helped them identify 64 false reports in one week and was successful in more than 80% of cases. The basic claim is that AI can find patterns that are common to false statements; among the giveaways, experts say, false statements are likely to be shorter than genuine ones, focus…
Content type: Examples
In October 2018, the Singapore-based startup LenddoEFL was one of a group of microfinance startups aimed at the developing world that used non-traditional types of data such as behavioural traits and smartphone habits for credit scoring. Lenddo's algorithm uses numerous data points, including the number of words a person uses in email subject lines, the percentage of photos in a smartphone's library that were taken with a front-facing camera, and whether they regularly use financial apps on…
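As a purely illustrative sketch of how a score might be derived from such non-traditional data points, consider the toy calculation below; the feature values, weights and baseline are invented, since Lenddo's actual model is proprietary and not described here.

# Toy, invented illustration of credit scoring from behavioural data points.
# Feature values, weights and baseline are hypothetical, not Lenddo's model.
features = {
    "avg_words_in_email_subject": 6.0,   # words used in email subject lines
    "front_camera_photo_ratio": 0.4,     # share of photos taken with the front camera
    "uses_financial_apps": 1.0,          # 1.0 if financial apps are used regularly
}
weights = {
    "avg_words_in_email_subject": 2.0,
    "front_camera_photo_ratio": -10.0,
    "uses_financial_apps": 15.0,
}

score = 500 + sum(weights[k] * v for k, v in features.items())
print(f"Illustrative score: {score:.0f}")  # 500 + 12 - 4 + 15 = 523

The point of the sketch is not the arithmetic but the breadth of intimate behavioural data that must be collected before any such score can be produced.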
Content type: Examples
In November 2018, tests began of the €4.5 million iBorderCtrl project, which saw AI-powered lie detectors installed at airports in Hungary, Latvia, and Greece to question passengers travelling from outside the EU. The AI questioner was set to ask each passenger to confirm their name, age, and date of birth, and then query them about the purpose of their trip and who is paying for it. If the AI believes the person is lying, it is designed to change its tone of voice to become "more skeptical"…
Content type: Examples
In November 2018, worried American parents wishing to check out prospective babysitters and dissatisfied with criminal background checks began paying $24.99 for a scan from the online service Predictim, which claimed to use "advanced artificial intelligence" to offer an automated risk rating. Predictim based its scores in part on Facebook, Twitter, and Instagram posts - applicants were required to share broad access to their accounts - and offered no explanation of how it reached its risk…
Content type: Examples
In November 2018, researchers at Sweden's University of Lund, the US's Worcester Polytechnic Institute, and the UK's Oxford University announced that in August the US State Department had begun using a software program they had designed that uses AI to find the best match for a refugee's needs, including access to jobs, medical facilities, schools, and nearby migrants who speak the same language. Known as "Annie MOORE", refugees matched by the program were finding jobs within 90 days about a…
Content type: Advocacy
During its 98th session, from 23 April to 10 May 2019, the UN Committee on the Elimination of Racial Discrimination (CERD) initiated the drafting process of general recommendation No. 36 on preventing and combating racial profiling.
As part of this process, CERD invited stakeholders, including States, UN and regional human rights mechanisms, UN organisations or specialised agencies, National Human Rights Institutions, Non-Governmental Organisations (NGOs), research…
Content type: Examples
In 2017, US Immigration & Customs Enforcement (ICE) announced that it would seek to use artificial intelligence to automatically evaluate the probability of a prospective immigrant “becoming a positively contributing member of society.” In a letter to acting Department of Homeland Security Secretary Elaine Duke, a group of 54 concerned computer scientists, engineers, mathematicians, and researchers objected to ICE’s proposal and demanded that ICE abandon this approach because it would be…