
Big Tech’s bind with military and intelligence agencies
Big Tech is now in the business of enabling government surveillance. It was only a matter of time before its tools became tools of war and abuse.
- Tech firms with government contracts need to know that they’re potentially enabling abusive surveillance and other human rights abuses.
- With the news that Microsoft ceased some of its services with the Israeli government, we must push this conversation forward.
- As we learn of more cases of intelligence agencies and militaries using cloud services, Big Tech cannot continue to turn a blind eye.
- By design, it is hard to govern these agreements.
- Terms of services and ‘ethical principles’ will not help these companies avoid complicity in human rights abuses and atrocities.
- Industry will have to choose: either cease contracts with governments’ military and intelligence agencies because auditing is not possible, or monitor whether its services are being used in compliance with human rights.

In their gold rush to build cloud and AI tools, Big Tech is also enabling unprecedented government surveillance. Thanks to reporting from The Guardian, +972 Magazine, Local Call, and The Intercept, we have insights into the murky deals between the Israeli Government and Big Tech firms. Designed to insulate governments from scrutiny and accountability, these deals bode a dark future for humanity, one that is built using the same tools that once promised a bright, positive world.
On 25 September 2025, Microsoft announced the cessation of parts of its deal with the Israeli Government. While this is a positive development, it also exemplifies the looming dangers of Big Tech working with governments.
Even as Microsoft partially acted, the Israeli Government apparently moved the data to Amazon. Meanwhile, Google still has a multi-billion-dollar deal with Israel under Project Nimbus, a contract shared by Google and Amazon.
Society is seeing a sea change in industry’s capacity to exploit data. And industry is building that capacity for governments too. Unless something changes soon, governments everywhere will possess vast, unfettered surveillance capabilities, all because investors thought it would be clever to build server farms and AI tools.
As government power is increasingly enhanced with cloud processing, transparency and accountability must also follow. We warned about this back in 2021 when we learned the UK government’s spy agencies began contracting with Amazon. These public-private surveillance partnerships, often secret, are hazardous to the protection of rights, granting both governments and companies access to immense power while being shielded from accountability. Amazon was the go-to cloud provider for the world’s intelligence agencies, with the CIA agreeing its first contract in 2013. The market options subsequently expanded, and so did the CIA deal as it extended in 2020 to include Microsoft, Google, Oracle, and IBM. We warned then as we warn now: industry has to answer for itself which state security services it would be prepared to work for.
But then came AI tools, and the dangers grew exponentially.
What we know now about Israel’s surveillance infrastructure
On 6 August 2025, The Guardian, +972 Magazine, and Local Call reported that Israel’s Unit 8200, an intelligence unit of the Israeli Defense Forces (IDF), was storing 8,000 TB of data in Microsoft’s Azure, retaining and analysing it using ‘AI-driven techniques’. The data was predominantly from the West Bank, but had reportedly been “used to research and identify bombing targets in Gaza.”
The Israeli intelligence unit had been collecting millions of phone calls each day since 2022. Its organising mantra was reportedly “A million calls an hour”, motivated by the ambition to “track everyone all the time”.
The vast storage would allow the intelligence unit to retain past conversations so that they could ‘go back in time’ and retrieve prior conversations of people who become of interest.
The IDF’s response to the original disclosure of Microsoft’s role in the mass surveillance operation was that its work with companies like Microsoft was based on “legally supervised agreements” and that the IDF “operates in accordance with international law”.
What we know now about Microsoft’s role in it
On 25 September 2025, Microsoft’s Vice Chair & President announced that Microsoft would cease and disable ‘a set of services’ to a unit within Israel’s Ministry of Defence, because it “found evidence that supports elements of The Guardian’s reporting” about how Israel’s Ministry of Defence was using Microsoft’s Azure storage and AI services.
It’s reported that Microsoft informed the Israeli government that Unit 8200 had violated its Terms of Service by storing surveillance data. Microsoft states that their Terms of Service “prohibit the use of our technology for mass surveillance of civilians”.
This marks a significant change in Microsoft’s behaviour. When concerns were previously raised with Microsoft, the firm commissioned a study and found “no evidence to date” of any such abuses. Microsoft stated that it had “no information” about the kind of data stored by Unit 8200 in its cloud.
As much as we welcome this announcement from Microsoft, greater transparency is needed. Both industry and governments appear to rely on these contracts as safeguards against abuse. We need to verify all these safeguards. From what we know of other contracts, we fear that Microsoft, as it continues its overarching contract with Israel, is prioritising protecting its relationship with Israel over its commitments to privacy and human rights.
What we also know: it’s an industry-wide problem
This isn’t just about Microsoft.
The Israeli Government has had contracts with Google and Amazon since 2020. The Intercept’s analysis of Google’s internal documents revealed that Project Nimbus’ $1.2bn contract was split between the two firms, that it could potentially generate over $3bn between 2023 and 2027, and that the contract could be extended to 23 years.
As with the IDF’s response to the Microsoft story, Google initially claimed that its deal with Israel was bound by its Terms of Service, which prohibit deprivation of rights, injury, death, or other harms. As late as October 2024, Google highlighted the importance of these Terms of Service to staff, when staff asked how the deal was consistent with its AI Principles document, which forbade uses “that cause or are likely to cause overall harm,” including surveillance, weapons, or anything “whose purpose contravenes widely accepted principles of international law and human rights.” (Months later, Google deleted those mentions of surveillance and weapons from its principles.)
But The Intercept had access to internal Google documents also showing that the deal with Israel included ‘adjusted Terms of Service’, and that a Google lawyer warned the company in 2020 that the Israeli “government has unilateral right to impose contract changes”, and Google would retain “almost no ability to sue [Israel] for damages” stemming from “permitted uses … breaches.”
Google accordingly promised its staff that Project Nimbus is “not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” According to the Israeli contract document, however, the government “may make any use of any service included in the supplier’s catalog of services.”
In May 2025, The Intercept analysed another Google internal report that admitted that Google could not control what the government or military did. The report noted that Google was “not permitted to restrict the types of services and information that the Government (including the Ministry of Defense and Israeli Security Agency) chooses to migrate.”
Additionally, like Microsoft, Google was not in a position to monitor the use of their cloud services.
We’re left with questions: what do Google’s and Amazon’s contracts with Israel contain? And what do other contracts with other governments contain?
Key Challenges
Cloud containers are hard to monitor
At PI we’ve always described ‘cloud’ storage as akin to ‘storing things on other people’s computers’, and thus inherently unsafe.[1] Industry has worked to reverse this image by arguing that its computing environments are safe. When Microsoft announced the cessation of some of its services to Israel, the firm even invoked ‘privacy’ to argue that it protects the confidentiality of Israeli government data on Microsoft’s servers.
It is therefore odd that, when previously faced with criticism over its contract with Israel, Microsoft commissioned a report that it said had found no evidence of any ‘failure to comply with Terms of Service’ or of its services being used to ‘target or harm people’. How would Microsoft even know whether its cloud service was used in this way if it did not breach the privacy of its government client?
Microsoft’s most recent announcement admits that there were some unacceptable processing activities, but the firm also claims it did not breach the privacy of the service in order to determine this. Rather, it stated: “[a]t no point has Microsoft accessed [Israel’s Ministry of Defence]'s customer content. Rather, the review has focused on Microsoft’s own business records, including financial statements, internal documents, and email and messaging communications, among other records.”
And so, are they unable or unwilling to query the processing that takes place in cloud containers when it comes to government contracts?
Governments are tough clients
It isn’t quite clear that industry can rely on the behaviours of its government clients, the law, the terms of services, or its relationships.
The IDF initially claimed that its work with companies was based on “legally supervised agreements” and that the IDF “operates in accordance with international law”. But it also stated that Microsoft “is not and has not been working with the IDF on the storage or processing of data.”
Yet the reporting on Microsoft’s decision to alter its service to Israel also stated that Microsoft was concerned that “some of its Israel-based employees may not have been fully transparent about their knowledge of how Unit 8200 used Azure when questioned as part of the review.”
Similar issues arose at Google. The Intercept’s reporting on internal Google reports about project Nimbus found that there was ‘a deep collaboration between Google and the Israeli security state’, including a Classified Team within Google. Staffed with Israeli nationals with security clearances, it was designed to “receive information by [Israel] that cannot be shared with [Google].”
This was apparently a first for Google, with a Google internal report stating, according to The Intercept, “[t]he sensitivity of the information shared, and general working model for providing it to a government agency, is not currently provided to any country by GCP” (which we interpret as Google Cloud Platform).
How many other government contracts now include this type of cooperation? And what other access do these staff have?
Big Tech needs to audit or exit
Microsoft’s review process is not yet complete, and we hope more will become clear with the promised upcoming third-party report.[2] It’s essential that Microsoft explains whether it knows how its services are used by government users.
All Big Tech cloud providers and all providers of AI tools must explain how they actually commit to their principles, and check compliance with their terms of service by governments who use their systems for surveillance.
Big Tech is increasingly offering governments immense storage, processing power, and tools. These provide unprecedented capabilities to governments and companies who want to conduct surveillance and other forms of rights abuse.
The cases of Israel’s deals with Microsoft, Google, and, we now presume, Amazon can provide a launching pad to hold both governments and industry to account.
Cloud storage can be used to enable abuses, including mass exploitation of data. AI tools previously unimaginable but now available ‘off the shelf’ can be used to perpetrate atrocities. Any government, or any party with vast amounts of personal data, can now do this. As a result, new protections are required.
Even a few years ago this was unimaginable. Not only did the technology not exist, but much of the industry had declared that it was unwilling to get into military contracting. The increasing militarisation of our societies is now moving at an alarming pace. The world has changed quickly but that does not mean accountability is a thing of the past.
The increased investments into AI must now be seen in this frame as well: these tools are and will be used for abuses of human rights. They will be used for atrocities. And they are being used for genocide.
And unless industry changes its conduct, it won’t stop there. As the July 2025 report of the UN Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967 states, we must remember that this amassing of data doesn’t happen in a vacuum but rather is part of a broader government attack on the Palestinian people. That report documents that Microsoft “has been integrating its systems and civilian tech across the Israeli military since 2003, while acquiring Israeli cybersecurity and surveillance start-ups”. (Google is also acquiring Israeli cybersecurity firm Wiz.) The UN Special Rapporteur also calls for a cessation of business activities linked with, contributing to, and causing human rights violations and international crimes against the Palestinian people.
Yet under the contractual arrangements, it’s unclear how these companies could monitor for these violations, even as they say they are concerned about abuses.
Microsoft and Google both claim to adhere to principles, ethics and ‘responsibility’. Microsoft goes so far as to say “[w]e will hold every decision, statement, and action to this [ethical] standard. This is non-negotiable.”
Before more atrocities and abuses follow, we urgently need rules and frameworks to govern this contracting. Industry needs to know how to create contracts that are auditable and can prevent abuses; or get out of the business of transacting with governments’ intelligence agencies and military.
Endnotes
- Although most cloud services offer access controls, both virtual and physical, and encryption by default, who holds what access, with which keys, and whether they can be compelled to cede them remains contentious for a service user.
- Microsoft’s review is being conducted by the law firm Covington & Burling. As a statement of interest, Privacy International receives pro bono services from Covington & Burling.