We read all the submissions sent to the UN Special Rapporteur on digital technology, social protection and human rights – here is what they have to say


In May, the United Nations Special Rapporteur on extreme poverty and human rights, Philip Alston, invited all interested governments, civil society organisations, academics, international organisations, activists, corporations and others to provide written input for his thematic report on the human rights impacts, especially on those living in poverty, of the introduction of digital technologies in the implementation of national social protection systems. The report will be presented to the General Assembly in New York in October 2019.

As well as presenting our own submission, Privacy International analysed all the other publicly available contributions in order to map the main concerns expressed about how social protection programmes across the world increasingly integrate digital, automated and biometric technologies. Here is what we found.


Submissions at a Glance:


  • Total number of submissions[1]: 56

  • Total number of submissions from regional/international organisations and think tanks (including PI): 3

  • Total number of submissions from government bodies in Africa, Asia, Latin America and the Middle East: 11

  • Total number of submissions from government bodies in Europe, North America and Australia/New Zealand: 6

  • Total number of submissions from organisations and think tanks/universities located in Africa, Asia, Latin America and the Middle East: 11

  • Total number of submissions from organisations and think tanks/universities located in Europe, North America and Australia/New Zealand: 25

  • Of which from/about Australia/New Zealand: 8


What were the main concerns raised?


1. Discrimination and Exclusion

Perhaps the most commonly cited human rights concern across the submissions was the capacity of digital and automated social protection systems to profile and punish benefit claimants. From Cashless Debit Cards to scoring systems, advocates worldwide raised concerns about social and racial profiling through data collection, as well as about the determination of eligibility and the policing of claimants through automated, data-driven processes. Automated decision-making systems were also described by many submitters as punitive towards people less familiar with the national system - like refugees and migrants - or those who are less digitally literate.


The submissions presented a wide variety of case studies that looked at how governments are using integrated digital technology systems to target particular groups in need of social assistance. One of the key frameworks they used for understanding automated and digitised social protection systems was ‘Conditionality-by-Design’, the idea that some of these systems are in fact surveillance systems designed to extract information from claimants in exchange for social assistance and protection.

“For example, Chile’s biometric identification system, recently implemented to administer the school meals programme, requires children to provide their fingerprints in order to obtain the meal. Although the use of biometrics must be approved by a parent or guardian, what is more alarming is that school children will only receive a free school meal on the condition that their parents consent to giving away this data.”

Other case studies presented in the submissions focused on how systems of assistance for asylum seekers become surveillance mechanisms deployed by governments to monitor people from overseas by tracking their expenses and movements.

“In the UK, for example, automation was introduced into the new digital Universal Credit benefit system in order to facilitate registration and eligibility checks. In practice, this has meant that people have been cut off from vital forms of social assistance because of flaws in the system’s design and claimants’ lack of experience with it.”



2. Digital Access

Access issues - including digital literacy, internet connectivity and ownership of devices like smartphones - were among the most commonly raised problems with the rollout of digital technologies in social protection systems across the world. These issues also intersect with concerns around discrimination: poorly designed, ‘digital by default’ systems that are difficult to use can increase the chances of accidental non-compliance and accusations of fraud. Many submissions also discussed internet connectivity in the context of ‘digital by default’ systems and the inconvenience this causes claimants who lack a home internet connection or do not own a mobile device, as well as the additional cost of data plans.


Many submitters also raised concerns about how digital systems are being designed, and how this relates to welfare and social protection systems more broadly. Submitters expressed concerns about how differently-abled claimants can access online portals and cope with the shift in the burden of proof away from the government and towards the individual. Various submissions cited the example of Singapore, which has a digitised system for migrant workers:

Economic migrants are issued a credit-card-sized work permit, which also serves as an ID and stores other important information accessible via a QR code on the reverse of the permit, readable with a mobile phone app. Submitters described this process as cumbersome, and noted that not all workers have a smartphone, which has led to workers accidentally overstaying their visas.


3. Privacy, Data Protection and Security

Privacy, data protection and security concerns play an increasingly significant role as tech-integrated social protection systems expand. Many submitters shared privacy concerns over increasingly invasive data gathering about citizens and people from overseas, and advocated for standardising privacy regulations across all systems. Submission writers examined the relationship between privacy, security and protection, and highlighted that although privacy can be seen as a solution to the problems of profiling and exclusion, it should also be a starting principle for the way social protection systems are designed.


Many submitters who raised concerns about privacy, data protection and security cited the Aadhaar project in India as an example of an overly invasive biometric identification and data collection system. They also flagged that the requirements of the Aadhaar project increase the risk of unnecessary surveillance and disproportionately interfere with the right to privacy of minorities.


4. Transparency and Accountability

Advocates argued in the submissions that people should have meaningful access to, and be able to understand, the way decisions are made about their social assistance and benefits. If automated systems are gathering data and cross-referencing multiple databases in order to determine eligibility or detect fraud, people should know how the system flags and responds to errors. Submission writers pointed out that the algorithms used in automated decision-making software are seldom subject to transparency regulations, and that access to their source code is often impossible or not permitted. Many argued that algorithmic transparency should be regulated through policy, and that legal guarantees to counteract discrimination and provide legal protection should be put in place.

Submission writers flagged the rollout of the Cashless Debit Card (CDC) in Australia as an example of this lack of transparency:

“The programme, which targets areas with large indigenous Australian communities, collects data from claimants, but provides no clarity or explanation as to the type of information being gathered and who it is being shared with.”

Others stated that the implementation of digital technologies in social protection systems should be subject to full transparency, including an easily accessible list of where, and for what purpose, new technology systems are being implemented. With regard to accountability, others argued that the lack of a clear focal point for appeal once a system is automated can cause confusion and distress for claimants.


5. Corporate-State Partnerships

Governments seeking to innovate and automate social protection systems are increasingly contracting data systems from private companies. This reliance on the private sector to build systems for implementing social security policy not only raises data protection concerns but also invites corporations to take a stake in the personal lives of citizens. Advocates raised concerns about the implications of these partnerships for transparency, as well as about the way governments might become ‘locked in’ and reliant on external expertise. Submitters also pointed to the many problems that can arise when a private company remains in possession of citizens’ sensitive information.


In the past, this has had serious repercussions for the wellbeing and privacy of claimants. Submitters gave many examples of the problems caused by fraught collaborations between the private sector and state entities to administer social protection. One example given is the 2012 partnership between the South African Social Security Agency (SASSA) and Mastercard, Net1 and Grindrod Bank, which was eventually declared unlawful.

“The programme gathered biometric data that SASSA had neither the supporting IT infrastructure nor the card readers to access. CPS, Net1 and Grindrod Bank collected and stored biometric data from grant beneficiaries on behalf of SASSA. However, this role was abused in order to advance sales of financial products and services by Net1 subsidiaries.”


What is Privacy International Doing?


PI is contributing to the work of the UN Special Rapporteur on extreme poverty and human rights to highlight the importance of the right to privacy and the impact of data exploitation on the delivery of social services across the world.


In particular, PI is working to expose and document the suppliers of tech solutions in state-run social protection programmes in order to shed light on the dual nature of the surveillance of benefit claimants, in which the private sector promotes the development of data-intensive welfare programmes for its own ends. Read more here.


What’s next?


Many other submitters echoed the areas of concern and the corresponding recommendations proposed by PI, and we hope that the report the UN Special Rapporteur presents to the General Assembly in October 2019 will reflect these issues.


It is an important opportunity to remind Member States of their obligations to progressively realise social, economic and cultural rights, as well as to uphold the right to privacy. We expect the report to clearly articulate concrete demands for Member States, companies and other stakeholders to review their practices and policies to ensure the protection of the fundamental rights of all, in particular those living in extreme poverty, with a view to advancing the eradication of such poverty.


This analysis was undertaken by Grace Tillyard, a doctoral researcher in the Media, Communications and Cultural Studies department at Goldsmiths College, London. Grace is on placement at Privacy International as part of a placement scheme supported by the Consortium for the Humanities and the Arts for South-East England (CHASE).


[1] At the time of writing.