Digital Health: what does it mean for your rights and freedoms?

Governments have been digitising their health systems and, more broadly, healthcare. We dive into the right to health situated in the digital context, exploring the digital health initiatives that put patients' data and freedoms at risk.

Introduction

Governments around the world are increasingly embracing innovations in digital technology to enhance the delivery of healthcare. Integral to these innovations is data. Healthcare providers, researchers and insurers all rely on the collection and analysis of information about people’s symptoms, ailments, lifestyle and treatment to improve their understanding and the services they can provide. 

The ever-increasing capacity to capture, store, process and analyse data means there is now more health-related data available, and more reliance on it, than could have been imagined in the past. Of course, health technology is no new phenomenon; the use of ICT in the health sector goes back to at least the 1990s. However, in the last decade there has been exponential growth in governments digitising their health systems and, more broadly, healthcare.

In this piece we outline the main discussions and measures that need to be systematically adopted to inform decision-making about digital solutions in the health sector, and provide examples of where these were not integrated into decision-making processes, and with what consequences.

Weighing the (potential) benefits with the risks

Developments in digital technologies can contribute towards improving healthcare and realising the right to health, but they also raise concerns about privacy, security and data protection, and more widely about dignity, non-discrimination, and equality. In this evolving data intensive landscape, governments and industry may find opportunities in the health sector to exercise power over individuals through surveillance, exploitation, profiteering, market domination, and control. 

PI has documented several such efforts by private companies to obtain and monetise health data, such as menstruation apps, mental health websites, companies selling diet programmes, and responses during the Covid-19 pandemic. At a time when health data is increasingly ascribed commercial value, the digital systems that collect and process this data should be scrutinised from a rights-based perspective. Without careful consideration of the impact and the risks, the promised benefits of innovation may end up creating more harm than good.

Here we provide an overview of the right to health and its place in the digitisation of the healthcare sector. We begin with an outline of the key principles of the right to health and how to uphold a human rights-based approach to digital healthcare. We then examine the digitisation of the health sector and the risks it poses to the right to privacy, data protection and freedom from discrimination.

Privacy rights and the right to health

The right to health

Everyone has the right to the enjoyment of the highest attainable standard of physical and mental health. This is enshrined in several international treaties, first in the 1946 Constitution of the World Health Organisation (WHO) and further included in Article 25 of the Universal Declaration of Human Rights; Article 12 of the International Covenant on Economic, Social and Cultural Rights; Article 24 of the Convention on the Rights of the Child; Article 12 of the Convention on the Elimination of All Forms of Discrimination Against Women; and Article 25 of the Convention on the Rights of Persons with Disabilities. 

The right to health has four “interrelated and essential elements” which provide a framework through which to understand and navigate how all healthcare provisions – including digital ones – ought to be designed and implemented. The Committee on Economic, Social and Cultural Rights’ General Comment 14 outlines the core components of the right to health:

  • Availability: healthcare provision must be in sufficient quantity. 
  • Accessibility: healthcare provision must be physically accessible and affordable. It must also be non-discriminatory. Information must also be accessible, without compromising data protection.
  • Acceptability: healthcare provision must respect medical ethics, confidentiality and cultural sensitivities. 
  • Quality: healthcare provision must be scientifically and medically appropriate. It must be safe, effective, people-centred, timely, equitable, integrated and efficient. 

The Office of the UN High Commissioner for Human Rights and WHO also provide a list of core principles underpinning the right to health, including: 

  • Freedoms: such as the right to be free from non-consensual medical treatment or forced medical experiments.
  • Entitlements: equality of opportunity for everyone to enjoy the highest attainable standard of health; equal and timely access; provisions of information; and participation in health-related decision-making processes.
  • Equality and non-discrimination: health services, goods and facilities must be provided to all without any discrimination.
  • Participation and inclusion: the population(s) affected must be able to participate in health-related decision-making processes.
  • Accountability: Member States must be able to demonstrate the measures that they are taking to comply with their obligations, which includes establishing mechanisms for reporting and monitoring progress that must be accessible, transparent and effective, and that enable individuals to claim their rights.

In its Global Strategy on Digital Health (2020-2025), the WHO emphasised that digital health should be developed according to a set of principles including "transparency, accessibility, scalability, replicability, interoperability, privacy, security and confidentiality". These principles must inform decisions made around the development of health initiatives, particularly digital initiatives that pose risks for patients and challenges for governance, “including data privacy and sharing and ensuring safety and protection of individuals within the digital health environment”. Strong legal and regulatory bases must be established to protect the "privacy, confidentiality, integrity and availability of data and the processing of personal health data".

Under such international law and guidance, States must take steps to fully realise the right to health both through government action (such as the provision of services and adoption of legislation) and through regulating the private sector. 

Privacy has become all the more essential in the age of data exploitation. The way data and technology are now deployed means that our privacy is under increased threat, and on a scale that we couldn’t have imagined 20 years ago outside of science fiction: the ways in which we can be tracked and identified have exploded, alongside the types and scale of information available about us.

Privacy, data-protection and health data

All patients are entitled to the fundamental rights of privacy and non-discrimination in their access to and provision of healthcare. The vast processing of personal data in the provision of healthcare services poses serious risks to patients if their data is misused, shared without their informed consent, breached or otherwise exploited. Some jurisdictions have taken steps to recognise the special status of health data and categorise it as "sensitive personal data" or a "special category of data", which affords it higher levels of protection. These higher protections may also extend to data that reveals sensitive health-related information, such as information that can be used to infer, derive or predict a health condition (e.g., someone's purchasing history). However, there is emerging debate about whether general lifestyle and wellness information should also be included.
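
To illustrate how such inferences can be drawn, the Python sketch below shows a deliberately naive, purely hypothetical rule-based mapping from purchase records to possible health conditions. None of the product names, rules or function names come from a real system, but the pattern illustrates why inferred data can warrant the same protection as health records themselves.

```python
# Purely illustrative: a naive, hypothetical mapping from purchases to
# inferred health conditions. No real retailer or data broker logic is shown.
HEALTH_SIGNALS = {
    "glucose test strips": "possible diabetes",
    "folic acid supplement": "possible pregnancy",
    "nicotine patches": "possible smoking cessation",
}

def infer_health_conditions(purchase_history):
    """Return the naive health inferences suggested by a list of purchased items."""
    inferences = set()
    for item in purchase_history:
        condition = HEALTH_SIGNALS.get(item.strip().lower())
        if condition:
            inferences.add(condition)
    return inferences

# Two mundane shopping baskets are enough to suggest sensitive conditions.
print(infer_health_conditions(["Bread", "Glucose test strips"]))
print(infer_health_conditions(["Folic acid supplement", "Milk"]))
```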

Such privacy and data protection concerns have only been magnified in recent years due to the use of new technologies within the health sector. For example, AI-enabled systems could scale up the exploitation and sharing of personal data. 

As a result, several international reports and recommendations on data protection and health have emerged in recent years, calling for care and caution to be exercised when dealing with data processing in the health sector.

The right to health in the digital context

As the health sector moves toward digital technologies to advance healthcare solutions, it is important to ensure that these tools are developed and deployed in a way that empowers patients’ right to health and integrates appropriate safeguards against data exploitation and privacy abuses. 

Why the drive for digital: the benefits of digital health

There are a variety of reasons why this “digital first” approach to healthcare is being promoted, despite the myriad risks it also carries. These technologies may offer new and efficient means to assist with medical diagnosis and to streamline services, as well as the hope of patient empowerment and inclusivity within healthcare services. However, the adoption of new tech in the delivery of healthcare should not be driven purely by 'techno-solutionism' (which sees technology as the solution to all problems facing our society) at a cost to fundamental rights, in particular the right to privacy and freedom from discrimination.

Furthermore, technology is often hailed as an essential tool to achieve the 2030 Agenda for Sustainable Development, including SDG 3 to ensure healthy lives and promote well-being for all at all ages. The WHO Global Strategy on Digital Health 2020–2025 also emphasised digital technologies as an essential component and an enabler of sustainable health systems and universal health coverage. (The WHO maintains a repository of National Digital Health Strategies, which demonstrates at a glance the wide reach of digital health initiatives.)

Below are some of the main examples often put forward for how digital tools might assist the healthcare industry.

Improved access to healthcare

Telemedicine services, such as remote consultations or chat-based services, and the availability of (reliable) information online all have the potential to make healthcare easier to access than exclusively in-person appointments. Although remote services often depend on an adequate internet connection, this digital option nonetheless means that more information might be available to more people more often (and without replacing traditional alternatives).

Remote services in the form of online resources may also eliminate barriers to accessing healthcare, such as by enabling patients to anonymously seek information on their symptoms online. Patients might also avoid potentially hostile gatekeepers to healthcare services – for example, in situations where certain communities might be discriminated against for accessing healthcare. In such cases, they might find alternative modes of support through telemedicine or other online resources that are remotely accessible through the Internet or their phone.

Patient empowerment and remote monitoring

Digital technology can also empower patients by providing them greater control over when they see their doctor, what services they can rely on to do so, how doctors administer their medicine, and the tools they can use to manage their treatment and health data. For example, wearables can carry out important checks, such as monitoring insulin levels or heart rate, without patients having to travel to in-person appointments.

Remote monitoring might also be particularly convenient for those with long-term conditions who often undergo regular checks and tests. Remote sensors on some devices can even trigger automatic responses from healthcare providers to symptoms or events, such as watches that detect heart attacks.
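
As a hedged illustration of how such automatic responses might work in principle, the short Python sketch below forwards a wearable's reading to a care team only when it falls outside preset bounds. The thresholds, data format and notify_provider() hook are hypothetical and are not based on any particular device or clinical guideline.

```python
from dataclasses import dataclass

@dataclass
class HeartRateReading:
    patient_id: str
    bpm: int  # beats per minute reported by the wearable's sensor

def notify_provider(reading: HeartRateReading) -> None:
    # Placeholder for however a real service would contact the care team.
    print(f"ALERT: patient {reading.patient_id} reported {reading.bpm} bpm")

def process_reading(reading: HeartRateReading, low: int = 40, high: int = 150) -> None:
    """Raise an automatic alert only if the reading falls outside safe bounds."""
    if not (low <= reading.bpm <= high):
        notify_provider(reading)

process_reading(HeartRateReading("patient-001", bpm=172))  # triggers an alert
process_reading(HeartRateReading("patient-001", bpm=72))   # no alert
```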

Furthermore, health apps that can integrate with social media may also allow individuals to create and join online support communities. Such support groups can be of enormous value to people suffering from physically or mentally disabling conditions and provide new ways to share tips based on practical advice and lived experiences. 

New research from our partners at the Centre for Internet & Society (CIS) reveals Indian health websites and apps are sharing intimate health-related data with third parties such as Facebook and Google. 

But these same digital solutions carry magnified risks…

The perceived potential of digital tools is not without risks. PI has long been monitoring how digital healthcare systems and benefit systems around the world are becoming reliant on the collection and processing of vast amounts of personal data that is not always safely managed. Tech-powered decision-making models are data-intensive and often reliant on profiling and algorithmic decision-making. Unfortunately, some of the more ambitious digital proposals by companies and governments depend on untested expansions of these practices, which are already problematic.

More (and more connected) data

The health sector has been facing increased pressure to provide better quality of care with finite resources. Under the banner of efficiency, the sector is inevitably turning to digital solutions to achieve these demands. Digitisation has changed the scale and scope of data use in healthcare, with the potential to enlarge:

  • the volume of data that can be stored; 
  • the number of ways data can be collected (e.g., wearables); 
  • our ability to analyse and process data; 
  • the ways in which different datasets can be brought together and consolidated; and
  • the range of situations and circumstances in which datasets can be used (their interoperability).

These capabilities can enable greater understanding of disease, treatments and trends in global public health. More (and more connected) data can also be useful for practitioners trying to work out what the best treatment for a patient might be, and for researchers assessing healthcare interventions. However, the same tools that enable such vast collection of sensitive health data also threaten patient privacy and data protection on a larger scale than ever before, should that data be misused, mishandled or otherwise exploited by the very digital solutions that are supposed to protect it.

Data leaks and breaches

For one, centralised health data systems store the data of millions of patients in a single system that could be compromised by a single concerted hacking attack. This compromises both patient safety and trust in digital systems. As more and more patient data is collected for existing and new health technologies, greater risks arise if that data is breached or even leaked (e.g., in Singapore, the U.S., France, the UK and Brazil). More data enables more quality research on treatment for patients, but data is not inherently impenetrable; further awareness must be raised around the imperative of building safeguards to protect the data stored in digital systems.

Data sharing without informed consent

Without the proper security and scrutiny, health-related data can also end up in the wrong hands. Large troves of data stored within a single system can more easily be shared by a data controller or processor with a third party without the consent of every single data subject. There are numerous examples of health-related data being sold, extracted by law enforcement agencies, unlawfully shared, used for new or unexpected reasons, or unfairly commoditised. PI uncovered numerous examples of governments storing telecommunications and app data under the guise of COVID-19 contact-tracing to collect location data, as well as user data being covertly exploited by digital health apps like pregnancy trackers. While someone may consent for their data to be used to support their own care, or even to conduct research for the public good, this does not mean that they consent for their data to then be used for other purposes or by other actors.

Profiling and manipulation

PI has also documented several scenarios in which private companies have obtained and monetised health data, such as through menstruation apps, Bounty, mental health data, and companies selling diet programmes. Data-intensive health apps and exploitative data-sharing practices mean an increase in tracking and identification features used to profile, target and manipulate. Increased powers to collect and process data mean it is easier than ever to infer, derive and predict sensitive personal data, as well as to sell and distribute this data to third parties.

Restrictions are needed on the purposes for which health-related data can be processed. Without the right safeguards, information about your health could be used by opaque automated algorithms to make life-changing decisions. For example, it might be used by insurers to wrongly deny access to care, or to decide who gets an organ transplant. Data protection safeguards must adapt to the sensitive and large-scale nature of health data collection and processing. Innovations in digital health solutions should not mean unchecked and unregulated data practices that threaten patients’ right to privacy.

Digital health apps of all kinds are being used by people to better understand their bodies, their fertility, and to access health information. But there are concerns that the information people both knowingly and unknowingly provide to the app, which can be very personal health information, can be exploited in unexpected ways.

Tools are not patient-centred

The promise of patient empowerment through digital tools also presupposes that these tools are designed with patients’ needs and rights at the centre. While these tools have the potential to empower patients, this goal is often deprioritised in favour of the commercial desire to collect as many datapoints as possible on patients and sell them to third parties like advertisers, as we found in our previous research. We must consequently hold digital health technologies, primarily those produced by big private sector actors, accountable for the risks they introduce through their efforts to “innovate” the sector.

Furthermore, the definition of patient “empowerment” is still hotly debated, but simply providing patients with more control over their personal health data is an important first step.

Inaccuracies in the data

Aside from the legal and ethical concerns of using health-related data, there is the question of whether data collected in non-clinical settings (e.g., through wearables or lifestyle apps) is even accurate and reliable. Concerns have been raised that data such as that collected by wearables is not reliable enough to use in healthcare settings, and diagnoses resulting from such arbitrary data collection risk causing more harm than empowerment for patients. For instance, inaccurate data could trigger inaccurate predictions or unequal health outcomes for underrepresented individuals assessed by data-powered algorithms.

Criminalisation and surveillance of marginalised communities

Law enforcement agencies and courts have also found new ways of using technology and health-related data against marginalised or criminalised communities. In the US, police have reportedly obtained Facebook chat logs to prosecute abortion seekers. In the UK, a woman was sentenced for taking abortion pills beyond the legal limit; during the court proceedings, it emerged that the police had obtained Google searches made by the woman, such as: 'I need to have an abortion but I’m past 24 weeks' and 'Could I go to jail for aborting my baby at 30 weeks.'

Nefarious actors can also co-opt digital tools to exploit and restrict access to healthcare, curtailing women’s access to reproductive rights. In the US, crisis pregnancy centres have been documented using geo-fencing, which can reportedly tag and target anti-abortion ads to the phones of people inside reproductive health clinics, and creating fake websites that merely give 'the impression' of offering objective counselling and information about pregnancy options.
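
To make concrete how little is technically required for the geo-fencing described above, here is a minimal Python sketch of a point-in-radius check on a phone's reported location. The coordinates, the 100-metre radius and the function names are hypothetical and do not reproduce any vendor's actual system.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def inside_geofence(phone_lat, phone_lon, fence_lat, fence_lon, radius_km=0.1):
    """True if a phone's last reported position falls within the fenced radius."""
    return haversine_km(phone_lat, phone_lon, fence_lat, fence_lon) <= radius_km

# Hypothetical clinic coordinates: any device reporting a position inside the
# 100 m fence could be singled out for targeted advertising.
print(inside_geofence(51.5008, -0.1246, 51.5007, -0.1245))  # True
print(inside_geofence(51.5200, -0.1000, 51.5007, -0.1245))  # False
```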

The UN Special Rapporteur on the right to health has remarked:

“Poor, minority racial and ethnic communities are disproportionately targeted and subject to surveillance, which could be exacerbated where health status is criminalized. That leads to them being disproportionately represented in the criminal justice systems of States and such individuals often face harsher punishments as a result of racial profiling and overpolicing compared to more affluent communities” (para 62).

Governments around the world are increasingly making registration in national ID systems mandatory for populations to access social benefits, healthcare services, and other forms of state support. By virtue of their design, these systems inevitably exclude certain population groups from obtaining an ID and hence from accessing essential resources to which they are entitled.

Discrimination and exclusion by design

Digital health solutions also risk exacerbating discrimination and exclusion. As the UNSR on the right to health cautioned, digital technologies might further exacerbate issues of equality and autonomy, ‘with greater risks for youth, marginalised people and criminalised groups,’ and may inadvertently restrict specific groups’ access to healthcare.

Healthcare datasets are also historically skewed, containing disproportionately more data about white urban men than other, underrepresented minority groups. New treatments and interventions based on these datasets may thus risk being biased and discriminatory: people may be excluded from new healthcare advances if they are excluded from healthcare data.

Some digital tools also involve creating databases based on categorisations of selected criteria like gender, ethnicity, race, nationality or legal status to serve as a basis for healthcare decisions. These filtering categories risk discriminatory exclusion, whether the decisions are ultimately made by healthcare professionals or automated algorithms, which can similarly reproduce human rights violations based on biased, exclusionary datasets.

Data-driven solutions also risk creating new barriers to accessing healthcare that can be enforced by governments. For instance, some governments might require national IDs, supported by a digital ID system, as a precondition for access to care, which can exclude vulnerable groups like asylum seekers who do not have national identification. The UN emphasises that individuals should not be denied services for this reason.
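
The Python sketch below illustrates, in the simplest possible terms, how such exclusion by design can arise: an entirely hypothetical eligibility check (with made-up field names) that treats a national digital ID as a hard precondition for registering for care.

```python
# Hypothetical eligibility rule: no national ID, no registration.
def can_register_for_care(patient: dict) -> bool:
    """Return True only if the patient record carries a national ID number."""
    return patient.get("national_id") is not None

print(can_register_for_care({"name": "A.", "national_id": "X-123456"}))  # True
print(can_register_for_care({"name": "B.", "national_id": None}))        # False: excluded
```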

The digital divide

There is also the broader issue of exclusion referred to as the “digital divide.” While digital tools might increase accessibility for some, they can create barriers for others. Access to the Internet is essential for almost all digital health tools, but according to UN Deputy Secretary-General Amina Mohammed in 2021, “almost half the world’s population, 3.7 billion people, the majority of them women, and most in developing countries, are still offline.” While digital tools may improve access to healthcare in some regions of the world, many communities are still excluded from these digital solutions. The UNDP warns that “relying on digital technologies as a primary system or strategy within the health sector may impact access and availability, and inadvertently exacerbate inequalities, contributing to the digital divide." This consequently deepens existing inequalities in access to care and entrenches power imbalances as data and resources are extracted from the global South to serve the needs of the global North.

The digital divide is also not only a global divide but a local one. Digital health benefits mostly middle- or high-income sectors in large urban areas, and a 2023 UK-based study found that digital health services are being designed and implemented without consideration of the views of those in the country who are “digitally excluded,” widening the gap in healthcare access for those already experiencing poverty and facing barriers to communication with clinicians. 

If health solutions are increasingly migrated to a digital sphere, how do we ensure the right to health for those excluded from this sphere?

Privacy International submitted its input to the UN Special Rapporteur on the right to health for her forthcoming thematic report to the Human Rights Council on the theme of “Digital innovation, technologies and the right to health”.

The role of Big Tech

The risks posed by digital health initiatives are further complicated, and even worsened, by the involvement of private companies, whether they are developing the technologies in question or entering the healthcare sector as providers themselves. Big tech companies like Google, Microsoft, Apple, and Amazon are all rolling out data-driven solutions for the healthcare sector, such as wearables that track health data; but as these private companies provide tools for a sector that has traditionally been a public service, an important question arises of who has access to the data stored or processed by these technologies.

Conclusion

There is no doubt that technology can help tackle some key challenges in the provision of healthcare services and empower patients. To do so, however, these digital tools must be designed in a way that actually benefits and upholds individuals’ rights to health, privacy and wider rights.

We continue to raise the need for a comprehensive human rights-based approach in the design and deployment of digital health initiatives and technologies, and for clear and enforceable regulatory mechanisms to understand the risks and provide sufficient safeguards. Designers and engineers should adopt a human rights by design approach, in consultation with, and with the meaningful participation of, the health and medical community, including patients, wherever these tools are intended to be deployed. Digital health tools should first and foremost empower patients, not merely automate mass data collection. PI has articulated these recommendations in our submission to the UNSR on health.