Why we need to talk about digital health
Since the first uses of ICTs in the health sector in the 1990s, more than 20 years ago, we have seen a digital revolution in health. The Covid-19 pandemic significantly accelerated the digitalisation of the health sector, illustrating how fast this uptake can be and what opportunities can emerge, but also, importantly, the risks it involves.
As we've said many times before, whilst technologies can be part of the solution to tackle some socio-economic and political challenges facing our societies, these are not risk-free - nor can they be relied on as the only piece of the puzzle.
The Covid-19 pandemic has propelled this discussion further with an unprecedented deployment of digital health technologies rapidly within a short period of time.
And yet we're still at a stage where human rights approaches to digital health are lagging behind: in practice, human rights considerations do little to inform the initial decision to adopt a digital solution, let alone its design and implementation.
In this piece we start to outline the main discussions and measures we need to see being systematically adopted to inform decision-making about digital solutions in the health sector, and provide examples of where these were not integrated in decision-making processes and with what consequences.
This piece is based on observations made by PI and our global partners exploring digital health initiatives. In particular we would like to thank El Instituto Panameño de Derecho y Nuevas Tecnologías (IPANDETEC) in Panama, The Kenya Legal & Ethical Issues Network on HIV and AIDS (KELIN) in Kenya, the Centre for Internet and Society (CIS) in India and Asociación por los Derechos Civiles (ADC) in Argentina for sharing their insights with us and their efforts to shape this policy area alongside us.
Is digital the solution?
We have observed a lack of questioning around the why (use tech and data) and a shift to a default how (do we make use of these technologies). The question needs to be flipped back: what is the problem identified, i.e. what is the need, and what are the possible solutions? There will never be a single, one-size-fits-all solution. The real question is therefore: what do beneficiaries really need?
We are not seeing enough of this sort of questioning in public discourse and decision-making processes, and often the digital initiatives that end up being adopted are disconnected from the ecosystem in which they will be introduced.
Several assumptions underpin the blind confidence that technology will enable the provision of better care to more people. As the examples below show, many of these assumptions are misguided, and they sideline the primary concern: whether adopting a given digital solution will actually lead to more effective delivery of quality care.
India: Mother and Child Tracking System
There is an assumption of efficiency, promised by suppliers of such technologies and by those driving digital health as a positive narrative for development and prosperity. This assumption persists despite the recognition that digital and paper-based systems can be compatible, and that in settings where digital infrastructure is still developing, data entry can take precious time away from care and treatment. India's Mother and Child Tracking System (MCTS), as documented by the Centre for Internet and Society (CIS), is an example of this. The MCTS collects vast amounts of reproductive health data about pregnant women, children and families during pre- and post-natal care, including antenatal check-up charts, immunisation information, data from all visits, family planning and counselling requirements for sterilisation, as well as associated welfare schemes. An initiative of the Ministry of Health and Family Welfare in India, it was first trialled in 2009 and then rolled out nation-wide in 2011. Data for the MCTS is collected manually at healthcare facilities and then entered into the system by data entry operators who, based on that data, generate and distribute the system's 'work plans' to health professionals. This process creates time lags of up to 72 days between data collection and digitisation, which negatively impact patients. Such delays not only call into question the effectiveness of the system, but also raise serious questions about the safety of the data awaiting digitisation, from storage to access, as well as about participating staff's know-how and awareness of data protection obligations.
As further documented by CIS, concerns also exist around the digitisation of such data, as well as with regard to the Indian government's attempts, since 2015, to link Aadhaar to the Mother and Child Tracking System for the purpose of authenticating beneficiaries and health workers at the point of service delivery and of distributing maternity benefits within the Janani Suraksha Yojana cash assistance scheme.
Global: the digital divide
One assumption is that those most in need of care have access to affordable digital devices that enable them to meaningfully engage with digital health systems. During the Covid-19 pandemic we saw countless examples of digital initiatives for accessing care, including vaccination programmes, such as the mobile CoWin app in India, and for accessing vaccination status documentation after inoculation. However, making these digital-only raises concerns about exclusion and discrimination because of the well-documented digital divide, which means that those already marginalised, including women, migrant communities and the elderly, will be further excluded. Acknowledging that this risk exists and should be mitigated, the WHO noted that key criteria for the design of Covid-19 vaccination certificates included that their implementation should not "increase health inequities or increase the digital divide" and that the documentation "needs to be in a format that can be accessible to all, for example, in paper and digital formats. Any solution should also work in online and offline environments across multiple platforms – paper and digital." So if the justification for digital solutions is inclusion and tackling inequality, why is it that so many concerns about digital health technologies are associated with these very issues?
UK: Contact tracing data processed not used
Often when digital solutions are proposed there is a twin assumption: that there is a need to collect personal data, and that a digital infrastructure exists to support the public health effort. In the UK, measures implemented for contact tracing included an application with which a person would scan a QR code at venues they visited; in addition, some public places were asked to collect data about their customers, as per government guidance. It later emerged that the Test and Trace programme not only failed to use the Covid-19 data collected by venues, but also failed to employ the QR code alert system built into the contact tracing app to alert anyone who had checked into a venue with an outbreak.
Ensuring safeguards are in place
Prior to developing a digital health initiative, governments must be aware of their legal, regulatory and ethical obligations, and if such mechanisms are not in place, they must be adopted.
This process can be quite complex, as there is a need to consider all the actors involved, their obligations, and how to protect the rights of individuals at every level of the chain within the digital health initiative. Given the extraterritorial nature of our digital ecosystem, ranging from the involvement of companies based elsewhere than where their products and services are deployed to globally accessible online databases, it is likely that some actors will be subject to different jurisdictions, and so have different and varying degrees of obligations, or could be subject to weak legal frameworks or none at all.
Such regulations are essential to setting up a framework for processing personal data in which roles and responsibilities throughout the lifecycle of the data or system are clear, and to minimising risks such as mission creep or function creep, i.e. using data for a purpose or function for which it was not initially planned, as well as security concerns such as exploitation of vulnerabilities in the system, unauthorised access and data breaches, among others.
Too often, governments jump to adopt digital solutions without ensuring the relevant regulatory mechanisms are in place, and where they are in place there is little evidence as to how they inform decision-making. This is an issue we see repeatedly across the world.
Kenya: Management of health data of PLHIV and Key Populations
Well before it adopted a data protection law, Kenya proposed several digital health initiatives with severe implications for persons living with HIV (PLHIV) and other key populations, including gay, bisexual and other men who have sex with men; women, men and transgender people who inject drugs and/or who are sex workers; and transgender people, groups that are socially marginalised, often criminalised, and facing a range of human rights abuses that increase their vulnerability to HIV. These initiatives included an approved plan to conduct a study of HIV and key populations, and a proposed presidential directive to collect up-to-date data and prepare a report on all school-going children living with HIV and AIDS. Both initiatives would have entailed the processing of biometric data. Both were successfully challenged by CSOs: the study's biometric marker was removed following pressure from national groups of key populations and advocates and from the international community (see more below), and the proposed directive was found unconstitutional by the High Court of Kenya and eventually dropped. While both initiatives were stopped in their tracks, it is unacceptable that they went that far in the first place, in the absence of a comprehensive data protection framework and with weak enforcement of existing regulations on health data and the management of HIV programmes. While Kenya has since adopted a data protection law, and further guidelines continue to be developed, KELIN highlight that policy-makers are still failing to consider and promote data protection when considering digital health initiatives in Kenya.
Central America: Covid-19 digital solutions
IPANDETEC documented that Panama, Costa Rica and Guatemala were deploying digital solutions to respond to the Covid-19 pandemic without having the necessary legal and regulatory safeguards in place. Measures included digital apps for contact tracing and self-diagnosis, health certificates, and digital vaccination forms, amongst others. These digital health initiatives became tools for governments to establish control over the management of their citizens' health by making the use of digital identity mandatory to obtain certain services. For example, in Costa Rica there is no formal obligation to use the Single Digital Health Record (Expediente Digital Único de Salud), an application for accessing health services, and yet it is present in all Social Security Fund facilities, so in practice it must be used to access services. Health-related personal data is of a sensitive nature, and its processing requires clearly defined regulations and adequate safeguards. However, the legal frameworks of the countries surveyed did not provide the level of safeguards that health data should enjoy, and many of these initiatives were developed and deployed in a legal void. For example, the Government of Guatemala launched the "Alerta Guate" application to combat the spread of coronavirus in the country; after robust criticism from international human rights organisations concerned about the data protection safeguards the application provided, the app was removed.
Argentina: Weak monitoring and oversight of digital health initiative
When exploring digital health initiatives in Argentina, ADC's initial analysis points to various laws and decrees aimed at regulating personal data, as well as sectoral regulation of the health sector and patient data - many of which are still in drafting stages and not yet implemented - but questions the extent to which these actually inform decision-making processes. In particular, ADC expresses concern about the proposed plan to establish the Single Federal Program for the Scanning and Digitalization of Health Records of Argentina, which raises serious concerns around the uses and purposes of this data, as well as data security risks. The current legal framework is weak, and those responsible for its oversight and accountability, such as the data protection authority, have limited expertise and resources to effectively advise on the deployment of such a system in the health sector.
Jordan and Palestine: Digital maternal, newborn and child healthcare initiatives
The reproductive and maternal care sectors have also seen the use of data-intensive systems. Examples of the digitalisation of services in these sectors range from facilitating scheduling through SMS to remote digital access to care and counselling using telemedicine, including health workers' use of mobile phones to monitor the health data and lifestyle of a pregnant person over the cycle of their pregnancy, or of a child over their immunisation cycle, and the use of mobile applications, sensors, wearable devices, and more. Initial findings of research being undertaken in Jordan and Palestine indicate that various digital maternal, newborn and child healthcare (MNCHC) initiatives are being deployed, and yet there is little evidence of whether and how these data processing activities are regulated. Where policies and regulations on data protection and privacy do exist, it is unclear how they inform the design and implementation of digital MNCHC initiatives, which often fail to integrate data governance, security and human rights standards and principles.
UK: mission creep of health data for law enforcement uses
One of the components of the UK's "hostile environment" migration policy has been data-sharing between health authorities and law enforcement. This has taken the form of using patient data collected for health purposes to trace undocumented migrants, and a broader system of entitlement checks and charging for health services called "Status Checking", which was part of formal agreements between NHS Digital and the Home Office, the entity responsible for immigration enforcement. Such policies and practices are detrimental because they create distrust of health providers and may deter people from seeking health care when they need it. In the UK, the hostile environment and its data-sharing arrangements effectively build a new border in hospitals and healthcare providers, which poses a real threat to the health and wellbeing of people with uncertain immigration status. This has meant that, in a situation like a global pandemic, those with current or prior uncertain immigration status have been deterred from coming forward to seek primary care or to get vaccinated, directly undermining public health efforts.
Singapore: breach of HIV data
In January 2019, it was discovered that the HIV-positive status of 14,200 people in Singapore, as well as their identification numbers and contact details, had been leaked online following a breach of the HIV registry managed by the Ministry of Health, though it would appear that the breach occurred well before this date. According to a statement by the Ministry of Health, the leaked records include the details of 5,400 Singaporeans diagnosed as HIV-positive before January 2013 and 8,800 foreigners diagnosed before December 2011. Patient names, identification numbers, phone numbers, addresses, HIV test results and medical information were included in the information leaked by a former Singapore resident. This had an intense impact on the LGBTIQ+ community, given that up until 2015 there was a total ban on people living with HIV entering Singapore, and on the right of foreigners with HIV to work there, with only a few exceptions.
What is the impact on people and communities?
Despite abundant evidence of the risks and harms associated with digital solutions - not just in the health sector but in others too - thorough human rights impact assessments are not regularly undertaken to identify those who will be disadvantaged and to understand in what ways they will be negatively impacted, even as the health sector becomes increasingly reliant on, and founded upon, digital solutions.
When such assessments are done, either they are not comprehensive enough to encapsulate human rights impacts beyond data protection, or they are not meaningful because their findings are not used to inform whether to proceed with the deployment of a digital solution in the first instance and, if it can go ahead, how its design, implementation and maintenance should be shaped.
Kenya: biometric health surveillance of key populations
Driven by a need for better data about persons living with HIV and key populations in Kenya, the National AIDS and STI Control Program (NASCOP) committed in 2015-2017 to undertake an Integrated HIV Bio-Behavioral Surveillance study ("IBBS"), to be financed by the Global Fund. The IBBS study would gather information about HIV incidence and prevalence, risk behaviours, intervention exposure, and other information useful for planning and evaluating the progress of HIV programmes. The study would also involve the use of biometrics to register the persons living with HIV and key populations taking part. The plans for the study were approved by national health authorities but were not subject to broader consultation with the communities to be studied and relevant advocates. As a result, the design of the IBBS failed to carefully consider concerns about the invasive nature of biometric data processing, especially for individuals and groups who may be subjected to criminal prosecution for their real or perceived health status and/or sexual orientation or gender identity. The risks of exposure, (re-)identification, targeting and persecution were real, and yet went unaddressed in the ethical review that decided to integrate biometrics into the study. Thanks to the quick response from national groups of key populations and their advocates, along with support from the global health community, the Kenyan health authorities committed to undertake further consultations, and NASCOP finally agreed to remove the use of biometrics from the IBBS study protocol. This example illustrates the importance of undertaking a thorough assessment of the human rights impact of deploying a technology, and in particular of involving rights-holders, i.e. the communities to be affected, in such processes so that their perspectives are understood, taken into account, and integrated into the design of such initiatives.
Global: identification as a pre-condition for access
As we've been documenting for years, governments around the world are increasingly making registration in national digital ID systems mandatory for access to social protection, including healthcare. By virtue of their design, these systems inevitably exclude certain population groups from obtaining a digital ID, and hence from accessing essential resources to which they are entitled. We have seen this play out in countries around the world, including India with Aadhaar, Kenya with the Huduma Namba, and Uganda with Ndaga Muntu. Research undertaken by Unwanted Witness, the Center for Human Rights and Global Justice and the Initiative for Social and Economic Rights documented how the current requirement to provide a national digital ID to access social rights led "to mass exclusion, shutting out as many as one third of Uganda's adult population", in particular the elderly and women, from accessing healthcare, including reproductive and maternal health care.
Philippines: Covid-19 contact tracing apps
Following the spread of Covid-19, the Philippine government launched a contact tracing application, StaySafe.ph, which aimed to contain the pandemic in the country. Despite the Data Privacy Act (2012) mandating a higher level of security for the processing of sensitive personal data, and a circular of the National Privacy Commission on the security of personal data used by government agencies requiring them to undertake a privacy impact assessment, there was no indication that such an assessment had been undertaken. The Foundation for Media Alternatives and other CSOs, including PI, called for the publication of this assessment if it had indeed been undertaken, but received no response.
Global: Covid-19 vaccination certificates
With the world continuing its efforts to respond to the Covid-19 global pandemic, vaccination programmes started to be rolled out in some countries from late 2020. They constitute a core element of a comprehensive public health response to the pandemic. But as the World Health Organisation has pointed out, the documentation of vaccination status and its verification carry human rights implications and can expose individuals and groups to risks of exclusion and discrimination, amongst others. And yet we are seeing countries around the world rolling out vaccination certificate and verification schemes that fail to identify and mitigate those risks.
Reining in the role of industry
Industry has identified the health sector as fertile ground for data exploitation. It plays different roles beyond providing tech "solutions": some of the bigger tech giants, such as Google, Microsoft and Amazon, are involved at different, complementary levels, from infrastructure to data management, analysis and product development.
Whilst they have the tools and resources, with many having shaped their business models around data exploitation and surveillance, we need to ensure that whatever contributions companies make to the solution proposed, they serve to protect people, and they comply with their internationally recognised responsibilities under human rights standards. Not only should companies be transparent about how their business models operate in practice, i.e. the design of their systems, and the solutions they provide to governments, but these should also be firewalled from other areas of their business models and interests, and data derived from these processing activities should not be monetised.
UK: NHS and Amazon
In 2019, the then UK Health Secretary announced a partnership between the NHS, the UK's health service, and Amazon, under which the NHS website's content would serve as the source for Alexa's answers to medical questions. Whilst we welcomed efforts to address different needs and facilitate patients' access to information, any public-private partnership needs close scrutiny, especially when it involves a company like Amazon, with its poor record of protecting people and their data. This is why Privacy International demanded the publication of the contract between the government and Amazon, to truly understand the nature of the agreement. The contract was published, but large sections of it were redacted, which led us to file a complaint with the UK Information Commissioner's Office demanding full disclosure. In April 2021, the Information Commissioner's Office issued a decision on our complaint requiring additional clauses of the contract to be published unredacted by the UK Department of Health and Social Care. Beyond the issue of transparency, the key concern about this partnership is the legitimisation of Amazon as an actor in the health sector at the expense of the National Health Service.
Global: Covid-19 solutions
Since the start of the pandemic, companies all over the world - from big tech to companies that might not be household names - have pitched data products, services and solutions in response to Covid-19. Companies need to consider carefully whether they are profiting from the health crisis, and what their involvement in such public health responses entails. They have a responsibility to respect human rights in all the activities they undertake, as recognised in the UN Guiding Principles on Business and Human Rights. Key matters for companies to consider include the purpose of their engagement, whether it is lawful, whether they are being transparent, and whether they comply with human rights, including data protection principles and standards. For example, it was recently reported that Palantir made £222 million in profits after being awarded an array of deals with the NHS, including a role in the NHS response to Covid-19 and a two-year deal to process patients' sensitive medical data. Some of Palantir's executive staff went on to work on, and be responsible for, various initiatives managed by the UK Department of Health and Social Care, including the Covid Test and Trace scheme.
The need for a human rights approach to digital health
PI and our partners have come to the same conclusion as the public health sector: while some equality and accessibility issues, such as the digital divide, are increasingly discussed with regard to digital health and health in general, there is limited consideration of the broader human rights implications, and in particular of the right to privacy beyond data protection compliance issues.
What is needed is a comprehensive human rights approach which provides a lens through which to assess developing threats that individuals are exposed to through the proliferation of digital health initiatives. Only by exploring the broader human rights implications from the right to health to non-discrimination and privacy, amongst many others, can we ensure that any proposed digital solutions integrate the safeguards and mitigation strategies necessary to protect people and their rights.
Some of the minimum requirements for governments would include but not be limited to:
- Undertaking Human Rights Due Diligence (HRDD) throughout the life cycle of the digital health initiative they design, develop, deploy, sell, obtain or operate. A key element of their human rights due diligence should be regular, comprehensive human rights impact assessments.
- Ensuring clear, precise and accessible regulatory and legal frameworks are in place:
- Human rights framework, including data protection law
- National digital health strategies, with explicit reference to existing obligations under national and international law
- Accountability including through independent regulatory mechanism and courts
- Establishing effective oversight of the role and responsibility of the private sector through regulated public-private partnerships
- Providing accessible and fair redress mechanisms for individuals and communities adversely affected, and organisations representing their interests
- Increasing understanding amongst the health sector of the implications of digital solutions, in particular their limits and associated risks, and of how these should inform decision-making from inception to design, implementation and monitoring.