Blurring the Line: How Militarisation of Tech is Reshaping our Town Squares

Introduction

Military tech is seeping into our daily lives, raising urgent questions about who controls our civic life and shapes our future.

Key findings
  • Big Tech and Defence Tech industries are building a new generation of tech for military and civil uses alike.
  • Defence-driven systems are seeping into everyday life with devastating consequences for our rights and freedoms.
  • These systems enable continuous monitoring and data collection. They bring with them predictions and biases that affect our rights and freedoms.
  • Companies are blurring the lines between military and civil technologies.
  • What happens when these firms and their technologies shape our town squares?
Long Read

We are on a war footing as we enter an era in which the tech world is increasingly defined by conflict. Innovation has never been driven solely by social needs, market forces or the common good. Military imperatives have periodically played a central role in steering the development of new generations of technologies. For instance, the origins of the internet can be traced back to defence research and initiatives like the ARPANET.

Now we are witnessing a significant shift: states and corporations are harnessing technology to advance foreign policy agendas and to assert geopolitical dominance. At the same time, the line between ‘defence’ tech and ‘civilian’ tech is blurring. The same companies building tools of war are also being entrusted with civilian infrastructure and tasked with delivering public services. This is in stark contrast to the last thirty years of tech innovation based on open research and systems, global access and trade, and a Tech Industry focus on consumers and communities.

We Have Seen This Change Before

Techno-solutionism and the rushed adoption of powerful new systems without adequate oversight are not new phenomena. In the aftermath of 9/11, there was an explosion of surveillance technologies, advancing mass monitoring under the guise of national security and public safety. Many of these tools, originally developed for countering terrorism, have since become deeply embedded in everyday governance with long-term consequences for privacy and human rights.

Today, we are witnessing a similar shift, albeit with different consequences. A new ‘defence tech’ industry is expanding rapidly, while governments are pouring vast resources into military technologies, moving to use commercial infrastructure for national security, and shaping deployment priorities with little public debate or regulatory scrutiny.

What Is the Problem?

The most pressing concern is that technologies incubated within military contexts are no longer confined to the battlefield, nor are they governed by traditional norms of warfare. As defence-driven technologies seep into everyday life, they raise three interlocking risks that threaten the integrity of our societies:

  1. Continuous Monitoring and Data Harvesting

In today’s landscape, waging war involves immense innovation in data collection and analysis. AI-powered surveillance and targeting systems go beyond simply classifying objects or individuals: they are designed to identify, generate, and sometimes counter new threats. By analysing vast datasets for behavioural patterns or other indicators deemed suspicious, these systems designate individuals as potential threats and enable responses to counter them.

This means that, by design, such systems cannot function without extensive surveillance infrastructures that continuously monitor entire populations. Only through this constant surveillance can they identify deviations from what is labelled “normal” behaviour. But normal is not a neutral benchmark: it is defined by those in power, and it can shift rapidly according to their motives. These systems, trained to enforce shifting definitions of order, operate without ethical brakes, turning our daily civilian lives into targets of suspicion. They subject entire societies to scrutiny and control.

Recent developments highlight the dangers of this model. In Gaza, Israel reportedly used an AI system called Lavender to generate thousands of targeting recommendations based on a vast database of individuals. At one stage, the system flagged up to 37,000 people as potential targets due to perceived links with Hamas. These links were often based on broad and opaque criteria, which expanded or contracted depending on how the system was trained to interpret these relations. To find these targets, the entire population was placed under scrutiny. The result was a system optimised not for precision or accountability, but for scale and speed, maximising the quantity of data processed and targets generated.

A similar logic underpins the rise of ‘deep sensing’, a new generation of military AI designed to gather, integrate, and analyse diverse data streams in real time to produce dynamic threat assessments. These systems draw from satellite imagery, drone footage, biometric data, and even social media activity, harvesting data across civilian and military environments alike. The goal is to create a “live” picture of the battlefield, but the implications are broader: entire populations risk being treated as data sources, often without their knowledge. The blending of civilian data into military systems further erodes the boundary between everyday life and warfare, raising profound questions around surveillance, proportionality, and the repurposing of civilian infrastructures for military gain.

  2. Predictions and Biases Define Our Lives

Machine-learning models often function as black boxes, making decisions in ways that even their creators struggle to explain. As these systems become more central to warfare, conflict risks being shaped by the logic of past battles, reinforcing cycles of violence rather than fostering accountability. For our daily lives, this means that these systems may replicate and amplify biases and mistakes, locking entire societies into patterns of discriminatory targeting and exclusion with little possibility for redress.

This concern is closely tied to the fact that training these AI systems requires vast amounts of data from diverse sources, often necessitating the blanket surveillance of entire populations. But how this data is analysed, and the conclusions drawn from it, are just as important. For instance, an investigation by The Guardian revealed that Israel’s military surveillance agency developed an advanced AI tool, comparable to ChatGPT, built from an extensive trove of intercepted Palestinian communications. However, experts and human rights advocates warn that such systems are prone to bias and errors, and their opaque decision-making processes make it difficult to understand how AI-generated conclusions are reached.

A further complication arises from what is called “bias in use”, which occurs when AI decision-support systems interact with human users in specific contexts, e.g. military settings, and when the same tools are transferred to contexts they were not originally designed for, e.g. civilian environments. This type of bias combines preexisting technical biases embedded in the system with new biases introduced through human interpretation and value judgments. Moreover, these systems can reinforce and amplify biases over time by continuously learning from their interactions, potentially increasing the number of individuals flagged as threats based on flawed patterns. This self-reinforcing feedback loop is difficult to monitor and predict, raising critical questions about who oversees these systems and how accountability is maintained.

Without transparency or accountability, these systems will inevitably make decisions and draw conclusions that go unchecked. The key question is: as these technologies spread into our everyday lives, who is being classified as suspicious or dangerous? And will these biases disproportionately target vulnerable communities, further entrenching existing patterns of discrimination?

  3. Big Tech’s Deepening Entanglement in Defence

The spread of these systems into our daily lives is no longer hypothetical, and the movement runs in both directions. The same defence-tech companies that develop surveillance tools, process sensitive data, and define what counts as “normal” behaviour in conflict zones are now playing central roles in our day-to-day lives.

Palantir, a data analytics and software company known for its work with intelligence agencies and the military sector, has partnered with the World Food Programme to “transform” humanitarian aid delivery. It is also embedded in civilian public services, for example through a partnership with the UK’s national healthcare service aimed at centralising patient data and enabling large-scale analysis of health trends across the United Kingdom.

At the same time, tech giants are increasingly involved in national and global security efforts, with Meta, for example, developing open-source AI tools to support US global security objectives.

It’s clear: defence contractors are moving into commercial markets, while Big Tech firms are looking to secure defence contracts. This convergence creates a dangerous overlap, where military logic and commercial incentives jointly shape the technologies and companies that govern our everyday lives.

We know Big Tech is seeking to dominate again in this new world. But this raises a fundamental problem: have these companies become so embedded in our societies that they are now too big to regulate? Have they become too critical to national security to fail? Too intertwined with the state to be held accountable?

War’s Reshaping of Our Town Squares Threatens Our Rights and Freedoms

This shift has profound implications for our daily lives. We urgently need a public debate about what is acceptable in our societies. The governance of technology’s role in defence must be robust, transparent, and accountable. This is not merely a struggle for privacy; it is a fight for the kind of society we want to live in: one where technology serves people, not entrenched power.

At PI, we are deeply concerned by these developments and are actively grappling with some urgent questions: What happens when military actors shape the direction and control of public spaces? What values are embedded in technologies originally designed for war, then repurposed for governance, policing, or social protection? And if companies gain dominance in civilian markets because of their wartime utility, how does that reshape power, accountability, and governance in times of peace?