New police technologies assign individual "threat scores"

A new generation of technology has given local law enforcement officers in some parts of the US unprecedented power to peer into the lives of citizens. In Fresno, California, the police department's $600,000 Real Time Crime Center is providing a model for similar centres that opened in New York, Houston, and Seattle over the decade between 2006 and 2016. The technologies used in these centres include ShotSpotter, which uses microphones placed around the city to triangulate the location of gunshots; a private database of more than 2 billion licence plates and locations gathered nationwide; Media Sonar software, which monitors individuals on social media and scans for threats to schools and gang-related hashtags; and feeds from 200 police cameras across the city, potentially joined by 800 more from city schools and traffic cameras and, soon, perhaps 400 more from officers' body cameras and local business surveillance systems.
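
ShotSpotter's own detection and location algorithms are proprietary, but the general principle it relies on, multilateration from time differences of arrival (TDOA) at several microphones, can be illustrated with a short sketch. Everything below (the sensor layout, the locate_gunshot helper, a fixed speed of sound) is an assumption for illustration, not a description of the actual product.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 C (assumed constant)

def locate_gunshot(mic_positions, arrival_times):
    """Estimate a 2-D source position from microphone arrival times.

    mic_positions: list of (x, y) sensor coordinates in metres.
    arrival_times: sound arrival time at each sensor, in seconds.
    """
    mics = np.asarray(mic_positions, dtype=float)
    times = np.asarray(arrival_times, dtype=float)

    def residuals(xy):
        # Distance from the candidate source to every microphone.
        dists = np.linalg.norm(mics - xy, axis=1)
        # Compare predicted and measured time differences of arrival,
        # using the first microphone as the common reference.
        predicted = (dists - dists[0]) / SPEED_OF_SOUND
        measured = times - times[0]
        return predicted[1:] - measured[1:]

    # Start the solver at the centroid of the sensor network.
    result = least_squares(residuals, mics.mean(axis=0))
    return result.x

# Example: four sensors on a 500 m square, simulated source at (120, 340).
mics = [(0.0, 0.0), (500.0, 0.0), (0.0, 500.0), (500.0, 500.0)]
source = np.array([120.0, 340.0])
times = [np.linalg.norm(np.array(m) - source) / SPEED_OF_SOUND for m in mics]
print(locate_gunshot(mics, times))  # approximately [120. 340.]
```

A deployed system would also have to detect and time-stamp the gunshot impulse in noisy street audio and account for temperature, wind, and reflections; the least-squares step above is only the final geometric stage.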

Sitting atop these sources of information is software called Beware. When a call comes in, Beware runs the address and uses various types of publicly available data to generate a colour-coded threat score for each resident. How the software works is unknown, as its publisher, Intrado, considers that information a trade secret. 
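
Because Intrado treats Beware's method as a trade secret, the sketch below is not a reconstruction of it. It is a purely hypothetical illustration of how a rule-based system might reduce aggregated records to a colour tier; the fields, weights, and thresholds (ResidentRecord, WEIGHTS, threat_tier) are all invented. It also shows why such systems draw criticism: the weights encode contestable judgements, and any error in the underlying records propagates directly into the score.

```python
from dataclasses import dataclass

# Hypothetical illustration only: none of these fields, weights, or thresholds
# come from Intrado, whose scoring method is not publicly known.

@dataclass
class ResidentRecord:
    arrest_records: int = 0          # counts pulled from aggregated public data
    weapons_permits: int = 0
    flagged_social_posts: int = 0    # posts matched against a keyword watch list
    prior_calls_to_address: int = 0

WEIGHTS = {
    "arrest_records": 3,
    "weapons_permits": 2,
    "flagged_social_posts": 2,
    "prior_calls_to_address": 1,
}

def threat_tier(record: ResidentRecord) -> str:
    """Map a weighted sum of flags to a colour tier (green / yellow / red)."""
    score = sum(weight * getattr(record, name) for name, weight in WEIGHTS.items())
    if score >= 8:
        return "red"
    if score >= 4:
        return "yellow"
    return "green"

# A single stale or mistaken record can change the colour an officer sees
# before arriving at the door.
print(threat_tier(ResidentRecord(arrest_records=1, flagged_social_posts=2)))  # "yellow"
```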

The software is controversial among civil libertarians, who view it as a troubling intrusion on privacy. Other concerns include the lack of public oversight and the potential for abuse and error. After a contentious Fresno City Council hearing on Beware, the city's police department began working with Intrado to turn off the colour-coded rating system and possibly the social media monitoring.

https://www.washingtonpost.com/local/public-safety/the-new-way-police-are-surveilling-you-calculating-your-threat-score/2016/01/10/e42bccac-8e15-11e5-baf4-bdf37355da0c_story.html

Writer: Justin Jouvenal
Publication: Washington Post

What is Privacy International calling for?

People must know

People must be able to know what data is being generated by the devices, networks, and platforms they use, and by the infrastructure within which those devices become embedded. People should be able to know, and ultimately determine, the manner of processing.

Control over intelligence

Individuals should have control over the data generated about their activities, conduct, devices, and interactions, and be able to determine who is gaining this intelligence and how it is to be used.

Identities under our control

Individuals must be able to selectively disclose their identity, generate new identities, pseudonyms, and/or remain anonymous. 

We should know all our data and profiles

Individuals need to have full insight into their profiles. This includes full access to derived, inferred and predicted data about them.

We may challenge consequential decisions

Individuals should be able to know about, understand, question and challenge consequential decisions that are made about them and their environment. This means that controllers, too, should have insight into and control over this processing.