
Case Study: Your Tweet Can and Will Be Used Against You

Date: 30 August 2017

Police and security services are increasingly outsourcing intelligence collection to third-party companies that assign threat scores and make predictions about who we are.

The rapid expansion of social media, connected devices, street cameras, autonomous cars, and other new technologies has resulted in a parallel boom of tools and software that aim to make sense of the vast amounts of data generated by our increased connection. Police and security services see this data as an untapped goldmine of information that gives intimate access to the mind of an individual, a group, or an entire population.

As a result, the police have the ability to enter and monitor our lives on an unprecedented scale. As our online lives increasingly blend with our lives offline, the police can monitor our social interactions on social media, watch public and private places with drones, collect our licence plates using automatic number plate recognition (ANPR), capture our images on CCTV and body-worn cameras, and identify us with facial recognition technology. Much of this is invisible to the human eye, obscuring the seriousness of the intrusion. If we could physically see what is happening, the outrage would be loud and clear.

The police aren’t doing this alone. Throughout the world, police not only purchase software and hardware that enable highly intrusive access into our lives; in a world where police, security agencies, and private companies can collect ever more data about us, they also outsource data collection and analysis to third parties.

 

What happened?

Information that we knowingly share on social media, such as posts, photos, and birthdays, as well as data that we unknowingly share, such as the time of day we are active on a platform, our location, and stylistic cues (which can reveal our mood, such as writing in all upper or lower case), is all of value to the police and security services. They gain access to this data by collecting information in the public domain, as well as by accessing commercially available data in public and private databases via third-party data brokers.

A slew of third-party companies offer police and security services tools that pull information from social media, data-broker databases, and public records into a centralised hub, organise and analyse the data, and turn it into actionable intelligence. This intelligence could be the likelihood of a suspect becoming violent, the movements of activists, or the likelihood that someone is a terrorist.

In 2015, the US Justice Department and Federal Bureau of Investigation admitted that the watch-listing system and no-fly lists in the US were based on “predictive assessments about potential threats”. People who had never been charged with or convicted of a violent crime were being flagged by the system, with little ability or opportunity to understand or question what had prompted the flag. Because the machine outputs a decision without giving insight into why it was made, the system’s human operators cannot offer any further explanation. Such a decision affects a person’s ability to move freely. Individuals are treated as guilty, reversing the burden of proof.

The Fresno Police Department in California uses a program that draws on billions of data points from social media, arrest reports, property records, police body-worn camera footage, and more to calculate threat scores for suspects. The police consulted the system after receiving an emergency call about potential domestic violence. Similar systems, which allow police to consult real-time information, are being deployed across the US, including in New York and Texas.
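To illustrate why such scores are so hard to contest, here is a minimal, entirely hypothetical sketch of how a system of this kind might collapse disparate data points into a single number. Every data source, weight, and threshold below is invented for illustration; it does not describe any real vendor’s product, only the general pattern of opaque weighted scoring that produces a verdict without an explanation.

```python
# Hypothetical sketch of an opaque "threat score". All features, weights,
# and thresholds here are invented for illustration purposes only.

def threat_score(subject: dict) -> float:
    """Collapse heterogeneous data points into a single opaque number."""
    # Invented weights: the person being scored never sees these, which
    # is precisely why the resulting number is so hard to challenge.
    weights = {
        "prior_arrests": 15.0,
        "flagged_posts": 8.0,     # social-media keyword hits
        "address_risk": 5.0,      # a score attached to a neighbourhood
        "associate_scores": 0.2,  # scores of people in the social graph
    }
    score = 0.0
    for feature, weight in weights.items():
        # Missing data silently counts as zero: another hidden assumption.
        score += weight * subject.get(feature, 0)
    return score

def threat_level(score: float) -> str:
    """Map the raw score onto a colour-coded level (thresholds invented)."""
    if score >= 100:
        return "red"
    if score >= 40:
        return "yellow"
    return "green"

subject = {"prior_arrests": 2, "flagged_posts": 3,
           "address_risk": 6, "associate_scores": 40}
score = threat_score(subject)
print(score, threat_level(score))  # → 92.0 yellow
```

Note that the output is just a number and a colour: nothing in it reveals that living in a particular neighbourhood, or knowing people with high scores, drove the result, which is the opacity problem the following section describes.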

Seeing opportunity in the migration crisis, IBM is developing software it says will be able to predict whether a migrant is an innocent refugee or a terrorist. The company is creating a system that brings together several data sources to analyse the probability of a passport’s authenticity and more. The sources the system pulls data from include “the Dark Web […] and data related to the black market for passports”, as well as social media and phone metadata. The implications of such a system could be catastrophic for those who are already vulnerable.

 

What’s the problem?

Privacy invasion and chilling effects

As police surveillance of our lives becomes more widespread, and more publicly understood, the way we use and interact on the web, and as a consequence the way we interact offline, will be undermined. As our online activity is swept up and analysed by governments, police, and companies, we lose the ability to autonomously explore, interact, and organise.

Fostering a sense of omnipresent surveillance harks back to 1970, when FBI headquarters sent out a memo urging agents to increase their interviews with activists to “enhance the paranoia endemic in these circles and will further serve to get the point across there is an F.B.I. agent behind every mailbox”.

 

Opacity

The wide and varied forms of data collected by police, governments, and shadowy third-party data brokers, combined with increasing reliance on opaque software for collection and analysis and the growing use of secret algorithms and complex machine learning, have resulted in powerful decision-making systems that are nearly impossible to challenge. As a result, we, as a society and as individuals, are left unable to understand how or why decisions about us are being made. Even knowing when a decision was made is becoming difficult.

 

Discrimination

Data used in these systems can never perfectly represent reality, and is therefore always partial, meaning that the decisions machines make are necessarily imperfect. Police and governments are increasing their reliance on these systems while lacking the technical expertise to understand the consequences of depending on decisions made by machines using flawed, inaccurate, partial, and biased data. This is problematic and dangerous. Furthermore, without public understanding of what data goes into these systems or how the machines reach a decision, police, governments, individuals, and societies have limited ability to understand the consequences of large-scale dependence on them.

 

The role of corporations

Despite the clear risks of relying on existing, potentially biased datasets, companies such as IBM, Microsoft, Cisco, Oracle, and Palantir offer platforms that allow police to navigate large datasets to support their investigations and responses. Sold as a solution to the problem of analysing the massive amounts of available data, the software outputs decisions that police and security agencies have begun to trust and depend on. Yet we still lack a public debate about what it means to use this data, what it means to challenge it, and how the most vulnerable in society are disproportionately impacted by it.

 

 

What’s the solution?

Public understanding of both the sheer amount of data available, as well as how it can be used against us, is limited. The risks inherent to mass data collection and data exploitation highlighted above demonstrate why we as a society should demand the creation of and adherence to strict regulatory and ethical frameworks that address the new reality and which limit the extent to which police and governments can collect, analyse, and use our data.

Data is part of our identity, and even non-personal data can reveal intimate details about us and our lives. To ensure that we live in a society where all people are treated as citizens and not as suspects, the data police collect should be necessary and proportionate and should only be stored when strictly necessary. We should be able to easily demand access to the data the police hold about us, including inferences made from our data. We should also be able to demand that data held about us is corrected when it is inaccurate.

Adoption of technology before sufficient regulation is in place undermines the public’s expectation of policing by consent.

Data processing technology is not an answer to all policing questions and should be considered with scepticism before adoption.

The shifting nature of information in the public domain should not undermine the right to privacy in the public domain.

Furthermore, the activities discussed in this article go against Privacy International’s forthcoming Data Exploitation Principles, specifically:

  1. Individuals should have insight into the data processing being undertaken on (and through) their devices.
  2. Data must be protected from access by persons who are not the user.
  3. Individuals have multiple and negotiated identities and as a result they must be able to curate their data and identities and selectively disclose, or be anonymous by default.