Content type: Advocacy
Why the EU AI Act fails migration
The EU AI Act seeks to provide a regulatory framework for the development and use of the most ‘risky’ AI within the European Union. The legislation outlines prohibitions for ‘unacceptable’ uses of AI, and sets out a framework of technical, oversight and accountability requirements for ‘high-risk’ AI when deployed or placed on the EU market.
Whilst the AI Act takes positive steps in other areas, the legislation is weak and even enables dangerous systems in the…
Content type: Advocacy
Generative AI models are based on indiscriminate and potentially harmful data scraping. Existing and emergent practices of web scraping for AI are rife with problems, and we are not convinced they stand up to the scrutiny and standards expected by existing law. If this balance is struck wrongly, people stand to have their right to privacy further violated by new technologies. The approach taken by the ICO towards web scraping for generative AI models may therefore have important downstream…
Content type: Advocacy
Our submission focussed on the evolving impacts of (i) automated decision-making, (ii) the digitisation of social protection programmes, (iii) sensitive data-processing and (iv) assistive technologies on the experiences and rights of people with disabilities.
We called on the OHCHR to:
Examine the impact that growing digitisation and the use of new and emerging technologies across sectors has upon the rights of persons with disabilities;
Urge states to ensure that the deployment of digital…
Content type: Advocacy
We submitted a report to the Commission of Jurists on the Brazilian Artificial Intelligence Bill, highlighting the potential harms associated with the use of AI in schools and the additional safeguards and precautions that should be taken when implementing AI in educational technology.
The use of AI in education technology and schools has the potential to interfere with the child’s right to education and the right to privacy, both of which are upheld by international human rights…
Content type: Long Read
On 12 April 2020, citing confidential documents, the Guardian reported Palantir would be involved in a Covid-19 data project which "includes large volumes of data pertaining to individuals, including protected health information, Covid-19 test results, the contents of people’s calls to the NHS health advice line 111 and clinical information about those in intensive care".
It cited a Whitehall source "alarmed at the “unprecedented” amounts of confidential health information being swept up in the…
Content type: Advocacy
On November 1, 2019, we submitted evidence to an inquiry carried out by the Scottish Parliament into the use of Facial Recognition Technology (FRT) for policing purposes.
In our submissions, we noted that rapid advances in the field of artificial intelligence and machine learning, and the deployment by police of new technologies that seek to analyse, identify, profile and predict, have had, and will continue to have, a seismic impact on the way society is policed.
The implications come not…
Content type: Advocacy
During its 98th session, from 23 April to 10 May 2019, the UN Committee on the Elimination of Racial Discrimination (CERD) initiated the drafting process of general recommendation n° 36 on preventing and combatting racial profiling.
As part of this process, CERD invited stakeholders, including States, UN and regional human rights mechanisms, UN organisations or specialised agencies, National Human Rights Institutions, Non-Governmental Organisations (NGOs), research…
Content type: Advocacy
The feedback in this document was submitted as part of an open Request for Information (RFI) process regarding the document created by The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems ("The IEEE Global Initiative") titled, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems.
Content type: Advocacy
Privacy International's response to the inquiry by the House of Lords Select Committee on Artificial Intelligence.