The new way police are surveilling you: Calculating your threat ‘score’

A new generation of technology has given local law enforcement officers in some parts of the US unprecedented power to peer into the lives of citizens. The police department of Fresno, California, uses a cutting-edge Real Time Crime Center that relies on software like Beware.

As officers respond to calls, Beware automatically runs the address. The program also scours billions of data points, including arrest reports, property records, commercial databases, deep Web searches and the subject's social-media postings. It then calculates the suspect's threat level, assigning one of three color-coded scores; the highest is a bright red warning. How exactly the score is calculated is a trade secret.

Similar centers have opened in New York, Houston and Seattle over the past decade. These powerful systems have also become flash points for civil libertarians and activists, who say they represent a troubling intrusion on privacy, have been deployed with little public oversight and have the potential for abuse or error. Some say laws are needed to protect the public.

External Link to Story

https://www.washingtonpost.com/local/public-safety/the-new-way-police-are-surveilling-you-calculating-your-threat-score/2016/01/10/e42bccac-8e15-11e5-baf4-bdf37355da0c_story.html
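Because the scoring method itself is a trade secret, the following is only a minimal sketch of how a color-coded aggregation of records could work in principle. Every data source, weight and threshold here is a hypothetical stand-in, not Beware's actual logic.

    # Illustrative sketch only: Beware's scoring method is a trade secret,
    # so every data source, weight and threshold below is a hypothetical
    # stand-in, not the vendor's algorithm.
    from dataclasses import dataclass

    @dataclass
    class RecordHit:
        source: str    # e.g. "arrest_report", "property_record", "social_media"
        weight: float  # invented severity weight for that source

    def threat_color(hits):
        """Collapse weighted record hits into one of three color-coded levels."""
        score = sum(hit.weight for hit in hits)
        if score >= 10:
            return "RED"      # the highest of the three levels
        if score >= 5:
            return "YELLOW"
        return "GREEN"

    # A responding officer's console runs the address and might see:
    hits = [RecordHit("arrest_report", 6.0), RecordHit("social_media", 4.5)]
    print(threat_color(hits))  # -> RED

Even in this toy version, the design choice that matters is visible: the final color hides both the underlying records and the weights, so the person scored has no way to see which data drove the result.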

What is Privacy International calling for?

Personalisation, persuasion, decisions and manipulation

Data observed, derived or predicted from our behaviour are increasingly used to automatically rank, score and evaluate people, and these derived or inferred data feed ever more advanced processing techniques that make consequential decisions. If this trend continues unchecked, people will be scored in all aspects of their lives, societies will be managed invisibly, and human behaviour will be under the control of the few and the powerful.

Profiling makes it possible to infer or predict highly sensitive details from seemingly uninteresting data. As a result, it is possible to gain insight into someone's presumed interests, identities, attributes or qualities without their knowledge or participation.
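As a minimal sketch of that mechanism, with invented signals and weights that carry no real statistical meaning, a profiler can combine innocuous behavioural signals into a prediction about a sensitive attribute:

    # Hypothetical sketch: the signals and weights are invented; they only
    # show how innocuous data can be combined into a sensitive prediction.
    SIGNAL_WEIGHTS = {
        "follows_fitness_pages": 0.2,
        "late_night_activity":   0.3,
        "grocery_basket_change": 0.5,
    }

    def infer_attribute(observed_signals, threshold=0.6):
        """Predict a sensitive attribute from seemingly uninteresting signals."""
        score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed_signals)
        return score >= threshold

    # The person disclosed nothing; the attribute is derived anyway.
    print(infer_attribute({"late_night_activity", "grocery_basket_change"}))  # True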

Such detailed and comprehensive profiles may or may not be accurate or fair. Increasingly, however, such profiles are being used to make or inform consequential decisions, from finance to policing to the news users are exposed to and the advertisements they see. These decisions can be taken with varying degrees of human intervention and automation.

In increasingly connected spaces, our presumed interests and identities also shape the world around us. Real-time personalisation gears information towards an individual’s presumed interests. Such automated decisions can even be based on someone’s predicted vulnerability to persuasion or their inferred purchasing power.
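A minimal sketch of such real-time personalisation, assuming an invented interest profile that the individual never confirmed, might rank content like this:

    # Sketch with invented profile fields: content is reordered against a
    # presumed-interest profile the individual never stated or confirmed.
    def rank_items(items, profile):
        """Order items by overlap between item tags and inferred interests."""
        return sorted(items,
                      key=lambda item: sum(profile.get(t, 0.0) for t in item["tags"]),
                      reverse=True)

    profile = {"sports": 0.9, "finance": 0.1}  # inferred, not user-stated
    items = [
        {"headline": "Market update", "tags": ["finance"]},
        {"headline": "Derby result",  "tags": ["sports"]},
    ]
    print([i["headline"] for i in rank_items(items, profile)])
    # -> ['Derby result', 'Market update']

The same sorting step could just as easily take a predicted persuadability or purchasing-power score as its key, which is what makes this kind of personalisation a lever for manipulation rather than mere convenience.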

Automated decisions about individuals or the environment they are exposed to offer unprecedented capabilities to nudge, modify or manipulate behaviour. They also run the risk of creating novel forms of discrimination or unfairness. Since these systems are often highly complex, proprietary and opaque, it can be difficult for people to know where they stand or how to seek redress.

Principle 8. We should know all our data and profiles

Individuals need to have full insight into their profiles. This includes full access to derived, inferred and predicted data about them.
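As an illustration of what such full insight could look like in practice, with a schema and field names that are entirely hypothetical, a data export would have to label derived, inferred and predicted items alongside the observed ones:

    # Hypothetical subject-access export: the point is that each item is
    # labelled by how it came to exist, not just what it says.
    profile_export = {
        "observed":  {"postcode": "N1", "page_views": 2341},
        "derived":   {"home_worker": True},   # computed from observed data
        "inferred":  {"income_band": "B"},    # statistical guess, may be wrong
        "predicted": {"churn_risk": 0.74},    # forward-looking score
    }

    for category, fields in profile_export.items():
        for name, value in fields.items():
            print(f"{category:9} {name} = {value}")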

Principle 9. We may challenge consequential decisions

Individuals should be able to know about, understand, question and challenge consequential decisions that are made about them and their environment. This means that data controllers, too, should have insight into and control over this processing.