From fossil fuels to artificial intelligence, public money often helps build powerful industries before accountability catches up.
Credit: Arvind Vallabh / Unsplash.
As governments rush to adopt artificial intelligence in public services, Privacy International asks who controls these systems, how people’s data is protected, and whether public services are becoming private black boxes.
Between the UK government’s announcement that it will commit £1bn to AI-related infrastructure, the EU’s launch of ‘InvestAI’ to mobilise €200bn of investment in AI, and the US Pentagon offering $200m contracts to AI companies like Anthropic (then OpenAI), Google and xAI, we are seeing a rapidly growing fervour among national governments to reap the perceived promises of artificial intelligence (AI). This government buy-in of AI in the form of massive investments and regulatory capture recalls previous trends of governments buying into and subsidising the oil and gas, tobacco and coal industries, often to the detriment of their own citizens. So we are left to wonder: is AI the new oil?
While some have previously responded to AI as regulators (e.g., the EU with the EU AI Act), the pro-regulation tide is shifting under the pressure of the AI race. The US, and increasingly the UK, are responding as customers and business partners, while the EU is caught between falling behind on home-grown AI development - at the risk of relying on AI built on US soil - and its commitments and cultural identity as a regulator of AI’s harms. These ‘GovTech’ relationships are still in their nascent stages, and only time will tell how they play out in practice. However, it is precisely at the genesis of this GovTech era that we should scrutinise the potential risks and harms to privacy that government uses of AI could create.
In this article, we discuss what we know so far about the state of GovTech and the larger concern of what we don’t know. We conclude with precautions about the lack of transparency and of standardised deployment guidelines for governments using AI, mindful of the geopolitical influences behind this GovTech advance and the concentrated market power of the handful of companies dominating the AI market.
The UK’s plans for AI, as articulated in its AI Opportunities Action Plan, are aimed at driving economic growth and improving the performance of public services (e.g., providing administrative tools for civil servants). This plan covers a wide range of departments and emphasises the government’s urgency to:
The UK government has already been quick to build relationships with third-party vendors, signing a deal with OpenAI and establishing a Memorandum of Understanding between the Department for Science, Innovation and Technology (DSIT) and Anthropic - both to enhance public services. This comes as the government has already been developing an internal arsenal of AI tools, such as:
DSIT categorises these AI products into two uses: productivity tools (like ‘Consult’) and policy tools (like ‘Extract’). Many of these tools are built in-house, but this can include some use of existing APIs (application programming interfaces) from third-party providers such as OpenAI. Representatives from DSIT describe these partnerships with vendors as ‘collaborations’ rather than ‘procurement’, as they intend to scale the tools up rather than use them off the shelf.
In the EU, we are observing an interesting shift in regulatory appetite. One of the earliest entities to legislate on AI, the EU has since moved towards a deregulatory, ‘pro-innovation’ approach. At the AI Action Summit in February 2025, European Commission President Ursula von der Leyen announced the launch of InvestAI, an initiative to mobilise €200bn of investment in AI, particularly the financing of four AI ‘gigafactories’ in the EU for training complex AI models. This suggests the EU’s growing inclination towards ‘home-grown’ sovereign AI over the risk of relying on US-grown Big Tech. The Commission has already announced seven AI Factories worth €10bn, reportedly the largest public investment in AI in the world, which ‘will unlock over ten times more private investment’.
It remains to be seen how these AI investments - and the national AI strategies that member states like France plan to release - will play out in practice alongside the phased enforcement of the EU AI Act. This could be a promising scenario for assessing how technology develops when policy is prescribed alongside it rather than after it.
The American playing field offers some insights into the GovTech wave as well, especially as it pertains to third-party vendors. The US government has been quick to pour money into top AI players like OpenAI, Anthropic, xAI/Grok, Google, Scale AI and Microsoft.
The US General Services Administration (GSA) announced the launch of USAi in August 2025, ‘a secure platform designed to let employees experiment with popular AI models made by OpenAI, Anthropic, Google and Meta’. The stated purpose of USAi is to let government employees voluntarily experiment with an approved suite of AI tools to make their workflows more efficient. According to the USAi website, a single agreement gives access to multiple AI providers that the GSA has vetted against federal requirements, ‘replacing the need for separate contracts, security reviews, and compliance checks with each vendor’.
The US Federal Government already has requirements for data processing, outlined by the National Institute of Standards and Technology (NIST) in its Federal Information Processing Standards (FIPS), though it remains unclear how these new agreements align with those existing obligations.
Claude, Gemini and ChatGPT were also added to the GSA’s Multiple Award Schedule (MAS) Program, which:
In effect, being an MAS contractor acts as a seal of approval for federal buyers looking to adopt the technology: the buyer does not have to run federal compliance checks themselves and can procure straight from the MAS list. Interestingly, OpenAI and Anthropic announced they would sell their services to the government for as little as $1, which:
This cements companies into governments’ ecosystems, much as IBM, Oracle and Microsoft embedded themselves in government before. Once a company becomes the incumbent system, particularly if proprietary features are added, migration to alternatives becomes challenging and costly.
Recent geopolitical intertwining between the US and the UK adds some interesting texture to the developing GovTech space, too. US President Donald Trump’s state visit to the UK in September culminated in a Memorandum of Understanding between the US and the UK for a ‘Technology Prosperity Deal’. The MOU affirms both countries’ ‘common desire to enhance cooperation in science and technology matters that support initiatives of mutual interest that produce tangible benefits to their citizens’. This union perhaps represents an urgency, or at least a unified agreement, to favour pro-deployment rhetoric over heavy-handed regulation. Among other provisions, the MOU intends to:
The argument made for deploying AI in the civil service is that it could help cut down on public spending and boost efficiency by better streamlining tedious administrative tasks. However, UK shadow science secretary Alan Mak cautioned that economic mismanagement could preclude any promised benefits. If a council website deploys an AI chatbot to service constituents who opt to speak with a human agent anyway, is the procurement and deployment of that chatbot a responsible use of taxpayers’ funds?
In parallel to these GovAI initiatives, there is a rapid and largely opaque race within military and defence administrations. The recent falling-out between Anthropic and the US Department of War has illustrated how the same frontier models are deployed for war, raising serious concerns around control and accountability. Yet, despite this technological convergence, policy initiatives aimed at regulating AI for civilian public services often deliberately exclude military and defence uses - most notably under the EU AI Act. This largely artificial institutional separation is not only problematic from a regulatory perspective; it also entrenches the militarisation of technology by masking how AI developed under logics of conflict, secrecy and exceptionalism can spill over into civilian governance, normalising intrusive practices and undermining citizens’ rights, accountability and democratic oversight.
Given the current state of play, what concerns us is how little we actually know about how these AI tools work and what their long-term impact on citizens will be, especially as governments have access to troves of sensitive personal data with which to further develop and fine-tune the technology. How are government AI models storing and processing data about public servants, citizens and residents of a country, whether in experimental sandboxes or beyond?
There are merits to streamlining tedious administrative tasks, but big questions around data privacy, transparency and automated decision-making must still be answered if governments are about to promote and embed widespread agentic AI use across civil services.
On the data privacy front, several questions arise:
Data protection in the GovAI context is especially important when we consider how highly sensitive, and sometimes secret, the government data processed by a black-box algorithm might be - e.g., where an employee or colleague lives, or confidential documents that appear in meeting notes.
To understand the data privacy concerns embedded in AI, we first offer a quick explainer of how AI is developed and fine-tuned. Recall that large language models (LLMs) are advanced machine learning models, powering tools such as chatbots or AI assistants, designed to understand and generate human-like language. LLMs are trained on huge troves of data to learn complex patterns in language and perform a wide variety of tasks, such as delivering chatbot responses, predicting text when drafting emails, or summarising written or spoken material.
In the context of government-deployed AI, the personal data of public servants, citizens and residents is a crucial building block from the initial training of a model through to its fine-tuning, especially if a government service continually retrains its in-house AI.
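To make this concrete, here is a minimal, purely illustrative sketch of the moment personal data leaves government infrastructure: a tool sends a consultation submission to a hosted LLM for summarisation. The endpoint URL, model name and payload shape are hypothetical - this is not any vendor’s real API - and the point is simply that anything not redacted before this call is thereafter subject to the provider’s retention and training practices, not the government’s.

```python
# Illustrative sketch only (hypothetical endpoint, model and payload shape):
# redacting obvious personal data before a government tool sends text to a
# hosted LLM for summarisation.
import json
import re
import urllib.request

LLM_ENDPOINT = "https://llm.example.gov/v1/summarise"  # hypothetical URL

def redact_pii(text: str) -> str:
    """Best-effort masking of email addresses and UK-style phone numbers
    before the text leaves government infrastructure. Real deployments need
    far more robust PII detection than these two regexes."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"(?:\+44|\b0)\d{9,10}\b", "[PHONE]", text)
    return text

def summarise_submission(submission: str) -> str:
    payload = json.dumps({
        "model": "hypothetical-model",
        "prompt": "Summarise this consultation response:\n"
                  + redact_pii(submission),
    }).encode("utf-8")
    req = urllib.request.Request(
        LLM_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
    )
    # From this point on, whatever slipped past redaction is governed by the
    # provider's retention and training terms, not by the deploying government.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["summary"]
```

Even this toy example shows why contractual and technical safeguards matter: redaction is best-effort, and anything that slips through is held on whatever terms the vendor sets.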
The answers to the above questions on data privacy may vary depending on what purpose the AI tool serves and what types of data it processes. An AI tool that organises constituents’ emails or sorts them based on household data may warrant a higher level of scrutiny than one that simply handles administrative data (e.g., a civil servant using an AI assistant to schedule calendar meetings). Nonetheless, it is crucial for governments to clarify exactly how individuals’ data is processed, particularly how long it is retained by the AI system and whether it is held for further fine-tuning. Deployers (i.e., government deployers) should also be clear about what purposes the AI serves, so that the type of data processed for each purpose is clearly delineated. This also helps risk mitigation and data orchestration efforts to ensure that data not covered by these purposes does not end up being processed by the tool.
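Below is a minimal sketch of what purpose-bound retention could look like in code: every stored record carries the purpose it was collected for, and anything past that purpose’s retention window is purged. The purposes and windows shown are illustrative assumptions, not any government’s actual policy.

```python
# Sketch of purpose-bound retention. The purpose names and windows in
# RETENTION_WINDOWS are hypothetical, not any government's real policy.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION_WINDOWS = {
    "consultation-analysis": timedelta(days=30),  # illustrative window
    "calendar-scheduling": timedelta(days=7),     # illustrative window
}

@dataclass
class StoredRecord:
    record_id: str
    purpose: str            # the purpose declared at collection time
    collected_at: datetime

def purge_expired(records: list[StoredRecord],
                  now: datetime | None = None) -> list[StoredRecord]:
    """Keep only records still inside the retention window for their declared
    purpose; records with no declared (or unrecognised) purpose are dropped."""
    now = now or datetime.now(timezone.utc)
    kept = []
    for record in records:
        window = RETENTION_WINDOWS.get(record.purpose)
        if window is not None and now - record.collected_at <= window:
            kept.append(record)
    return kept
```

The design choice worth noting is that retention is keyed to a declared purpose rather than to the dataset: data collected for one purpose cannot silently persist under another.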
It is also important for governments to build robust contractual safeguards into their procurement agreements with third-party vendors that clearly and adequately protect individuals’ data from access by non-government entities. These contracts should also include strict interoperability and transferability clauses that apply on termination, to avoid lock-in effects.
As we have observed already, the privatisation of public responsibilities can be deeply problematic if carried out without the safeguards required to ensure human rights are not quietly abused. Private companies have been known to test the limits of what can legally and ethically be done with individuals’ identities and data, without the level of accountability required of public authorities - a significant affront to fundamental rights when a public service is being delivered. We have identified a number of issues common to public-private partnerships that involve surveillance technology and/or the mass processing of data. To address them, we have defined corresponding safeguards that we recommend public authorities and companies implement when entering such partnerships, and which could inform GovAI agreements with companies.
There is little information right now about how government AI tools properly safeguard individuals’ data. For instance, when ‘Consult AI’ processes consultation submissions, how does the LLM process and store personal data about the individuals who submitted them, or any personal data contained in the text of a submission? Or, if a city government uses a third-party AI tool for administrative work, such as San Francisco City Council using Microsoft Copilot, what happens to the personal data contained within emails? Does the third-party AI vendor have any access to that data? How long might they retain it, and for what purposes beyond legal compliance obligations?
There have been attempts to alleviate public concern over data protection and AI tools. San Francisco city government’s use of Copilot is guided by its Generative AI Guidelines and the city’s AI Transparency Ordinance, which ensures ‘policymakers and the public have full visibility into the city’s use of AI tools, risks and safeguards’, including ‘enterprise-grade data protections’ and an intention to publish ‘department-submitted AI Inventory responses’. USAi, which is powered by a combination of third-party vendors, reportedly builds on cloud infrastructure managed by the GSA so that agency data ‘does not feed back in to train the companies’ models’. OpenAI’s Enterprise Privacy Policy, which covers commercial businesses using OpenAI’s API, maintains that OpenAI does not train on its clients’ data and stores business data for up to 30 days purely to meet legal obligations or resolve reports. Earlier this year, OpenAI launched ChatGPT Gov for US government agencies, much as it offers ChatGPT Enterprise to businesses. It is likely that the data protection safeguards in place for ChatGPT Enterprise also apply to ChatGPT Gov, as there does not appear to be further information about whether ChatGPT Gov carries special data protection obligations for handling government data. Governments will have legitimate concerns about where their citizens’ data is processed and held - a discussion particularly live in European discourse around data sovereignty and compliance with GDPR.
There is also the added question of data security when AI systems require computing power and data centres that only larger providers can afford to run. If a local council turns to a third-party vendor to supply a constituent-facing chatbot, does this mean personal data on residents could end up residing in the data centres of private AI companies?
Of course, this latter point is not dissimilar to what we’ve already seen with third-party cloud providers. Given the high compute power required by today’s more advanced web and digital systems, cloud service providers - particularly the big players like AWS and Azure that can afford to run these huge centres - have become integral to the functioning of almost any online service.
But AI adds new layers of concern around data protection on top of existing ones: not only might data be collected, stored and retained; it might also be continually fed back into an AI model for fine-tuning, and there is still not enough information about whether this data may reappear elsewhere and how well protected it really is.
Data protection safeguards and mechanisms are crucial if governments are to deploy AI internally this quickly and widely. The scattered attempts by governments around the world to address data protection should be standardised to a stricter degree that residents can trust, whether that means a stricter application of GDPR to AI systems or new, more robust impact assessments that address AI-specific harms.
These data privacy questions lead us to further questions around transparency and legal basis for processing:
Transparency disclosures might be more straightforward in some scenarios than others. For instance, if a government service deploys an AI-powered chatbot to field citizens’ queries, the chatbot can display a disclaimer that says ‘You are interacting with AI’. However, other uses - such as a civil servant using AI to bulk-sort and process constituent queries - offer no equally straightforward way to notify the affected individuals (constituents) about how their personal data is processed.
There is also the explainability concern about the AI tools being used. Whether it is an in-house tool developed by the government or a third-party product like Copilot, how explainable is the tool, such that a civil servant using it could explain why certain actions or decisions were taken by the AI (e.g., why sort this constituent’s response into a certain folder; why summarise meeting minutes in a way that prioritises some information over other information)? The UK’s Government Digital Service (GDS) sets a helpful starting point by making some of its code publicly available on GitHub. However, simply publishing the code might not be enough: seeing technical lines of code does not mean an affected individual will be able to understand why a certain decision about them, or response to their query, was made.
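Publishing code is one form of transparency; per-decision records are another. Below is a hedged sketch of the kind of audit record an AI-assisted sorting tool could emit so that a civil servant can later explain a specific outcome to the person it affected. All field names and values are hypothetical, not taken from any real government system.

```python
# Sketch: attach a human-readable audit record to every AI-assisted decision,
# so the *reason* for a specific outcome can be retrieved later - not just the
# source code. Field names and the model version string are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str              # which constituent query was processed
    model_version: str        # the exact model/prompt version used
    outcome: str              # e.g. the folder the query was sorted into
    rationale: str            # plain-language explanation, shown on request
    reviewable_by_human: bool
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def log_sorting_decision(case_id: str, outcome: str,
                         rationale: str) -> DecisionRecord:
    """Record enough context that 'why was my email filed here?' has an
    answer a non-technical person can understand."""
    return DecisionRecord(
        case_id=case_id,
        model_version="sorter-v0.1 (hypothetical)",
        outcome=outcome,
        rationale=rationale,
        reviewable_by_human=True,
    )
```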
It is also difficult for individuals to exercise rights to redress or remedy if they do not know how an AI system uses their data and at what stage of the internal pipeline (i.e., which decision-making parameters to challenge). To secure our digital lives, we need to know how AI systems work and how they treat us - especially when these tools are deployed in the delivery of essential public services.
This is not so much an articulation of bad practices as a call to action: keep a critical eye on the rising GovAI wave and its impacts on our day-to-day lives.
As of September 2025, we are seeing enormous fervour, and even reckless championing, of AI by governments all seeking to be the first to ‘innovate’ with this shiny new technology, while lacking the commitment to ensure citizens are protected in the process. There is certainly merit to driving efficiency in day-to-day workflows - and to migrating outdated legacy tech to newer, possibly more secure technology - in ways that benefit both civil servants and residents. But that’s the catch: there must be benefits, not a web of added risks that are complex to patch, as happens when a novel black-box technology runs loose.
If governments are to use AI, they must be prepared to address the data protection concerns it raises, and to explain how they intend to ensure transparency and informed consent in its deployment.
Of course, government deployment of technology is not new: we have seen governments justify which email provider to use and even what hardware employees should be issued. This has backfired in some cases, such as when the European Data Protection Supervisor (EDPS) found in 2024 that the European Commission’s use of Microsoft 365 did not comply with data protection law, for reasons including transfers of personal data and a failure to sufficiently specify what types of personal data were to be collected and for what specific purposes (the compliance issues were remedied in July 2025).
Tides can also change very quickly and unpredictably: Anthropic was once the US government’s first, go-to contractor for generative AI services, but it is now considered a risk by the US government following a breakdown in the relationship between the company and the government.
On the one hand, we are seeing efforts by some governments to accompany their deployment of AI with at least some regulatory safeguards, as with San Francisco City Council and USAi. However, we have also seen far too many situations where such proclamations are merely symbolic attempts to appease the public, while in reality enforcement falls through the cracks.
Consequently, there are important lessons for how governments should shape procurement relationships with third-party vendors that align with best practices. Doing so will shore up not only public trust but also the integrity of their systems. ‘Pro-innovation’ is thrown around constantly in MOUs and public announcements, but it inaccurately implies that regulation is anti-innovation. Regulation exists to ensure that the governments deploying these technologies uphold human rights, so that technology advances human wellbeing responsibly rather than setting us back with shinier tech on the shelf. Responsible AI is true advancement and innovation.
With all this in mind, we keep a pulse on what’s coming next: