
Photo by National Cancer Institute on Unsplash. Photographer Daniel Sone.
The UK Government heads down a strange path of accepting AI investment from US firms.
As governments consider adopting AI for public services, one particularly tricky area is the use of AI on health data. Patients’ data must be handled safely, as trust is essential to healthcare. In the rush to deploy AI, however, it is too tempting to side-step meaningful safeguards.
In this regard, the UK Government seems to be heading down a strange path. As it seeks to supercharge its AI capabilities (including within the National Health Service), the Government risks undermining patient trust in the very healthcare system it is trying to improve by outsourcing functionality to US companies. The increasing involvement of multinational commercial entities in UK healthcare prompts real concerns over how sensitive UK health data will be handled.
Last week saw a huge boost for the UK’s technology sector, with American tech firms (including Google, OpenAI and Microsoft) collectively investing billions of pounds into the UK’s artificial intelligence (AI) and tech infrastructure. The UK Government hailed these investments as signifying a ‘generational step-change’ in the UK’s relationship with the US, describing them as an element of a new ‘Tech Prosperity Deal’.
A key area mentioned as part of the Tech Prosperity Deal is healthcare. In July of this year, the UK released its 10 Year Health Plan, which emphasised the centrality of technology, innovation and AI in the Government’s plans for the National Health Service’s next decade. Critically, the plan stated that to move the NHS into the 21st century, its unique advantages will be used, including the NHS’s ‘world-leading data’. Political commentators have also suggested that using this data is an important step in fuelling the UK’s AI investment.
The NHS has one of the most comprehensive health datasets in the world. It is composed of data collected at patient level, repeatedly over long periods of time (potentially a patient’s entire lifetime). This longitudinal depth provides extensive insight into patients’ health and makes the dataset incredibly valuable. As an intangible asset, NHS data has been valued in the billions, and it is considered capable of creating enormous benefit for the NHS and patients if utilised carefully.
However, sharing the NHS’s data with US companies to promote healthcare innovation has been marred by controversy in recent years. There have been lawsuits over data-sharing schemes between the NHS and large US technology companies, including recent investors Google and Palantir.
Palantir, a US data analytics firm that handles immigration enforcement data for the US Immigration and Customs Enforcement agency (‘ICE’), among other clients, was awarded a £330m contract to create a ‘federated data platform’ for the NHS in 2023. The British Medical Association has raised concerns about Palantir’s handling of sensitive patient data, arguing that its processing of UK patient data lacks transparency and that its other military and immigration enforcement operations are incompatible with healthcare values and may undermine patient trust.
These concerns are not unfounded. Our work on Palantir’s and other companies’ military operations examines how the logic underpinning conflict technologies makes them fundamentally incompatible with technologies designed for health services. This is therefore a moment for governments to think differently about who they do business with, rather than rush into investment deals.
The uncertainty around companies’ treatment of patient data has generated controversy in the past and remains an issue now. Governments must put strong safeguards in place. The UK Government has published guidance, such as the code of conduct for data-driven health and care technology, to address concerns about the risks created by the digitalisation of healthcare. However, this code acts more like a set of guidelines than concrete protections. As a result, unease remains about the potential commercialisation of UK patient data and the degree of control UK citizens will have over companies’ use of their health data.
There are many great technological developments that can improve healthcare in the UK, including through the use of AI and patient health data. Nevertheless, it is essential that the price of such technological innovation is not the undermining of UK patients’ rights, particularly to privacy and data protection. In the wake of a wave of new tech investment in the UK, we must remember that while patient health data holds great value for companies as an asset, it is nonetheless composed of individuals’ sensitive health conditions and private information. It is also important to remember that even anonymised data is not entirely secure from re-identification, and therefore continues to present risks to privacy.
Fundamentally, healthcare data should always be processed transparently, confidentially and with patients’ fully informed consent. These considerations are foundational to healthcare. While healthcare can and should pursue technological innovation to better treat patients, companies must not forget that patients’ agency over their health data is paramount, and must not jeopardise it in the race to develop new medical solutions.