Predictive policing

22 Feb 2020
The Hong Kong Department of Health has asked the police to deploy its computerised Major Incident Investigation and Disaster Support System to trace the contacts of patients infected by the novel coronavirus. The system was previously used during the SARS epidemic in 2003.
21 Sep 2018
In 2018 a report from the Royal United Services Institute found that UK police were testing automated facial recognition, crime location prediction, and decision-making systems while offering little transparency about how the systems were evaluated. Among the systems examined was an automated facial recognition system trialled by South Wales Police.
26 Jul 2018
In 2018, the chair of the London Assembly's police and crime committee called on London's mayor to cut the budget of the Mayor's Office for Policing and Crime, which provides oversight, in order to pay for AI systems. The intention was that the efficiencies of adopting AI would free up officers' time.
08 May 2018
In 2018, documents obtained by a public records request revealed that the Los Angeles Police Department required its analysts to maintain a minimum of a dozen ongoing surveillance targets identified using Palantir software and a "probable offender" formula based on an LAPD points-based predictive scoring system.
21 Sep 2018
In 2017, the head of China's security and intelligence systems, Meng Jianzhu, called on security forces to break down barriers to data sharing in order to use AI and cloud computing to find patterns that could predict and prevent terrorist attacks.
17 May 2018
In May 2018, US Immigration and Customs Enforcement abandoned the development of machine learning software intended to mine Facebook, Twitter, and the open Internet to identify terrorists. The software, announced in the summer of 2017, had been a key element of President Donald Trump's "extreme vetting" initiative.
17 Sep 2018
In 2018, at least five British local authorities began developing systems intended to use predictive analytics to identify families needing attention from child services, on the basis that algorithmic profiling could help them target their scarce resources more efficiently. Data on at least 377,000 people were incorporated into the systems.
Designed for use by border guards, Unisys' LineSight software uses advanced data analytics and machine learning to help officers decide whether to inspect travellers more closely before admitting them into their country. Unisys says the software assesses the risk posed by each traveller.
In 2012, Durham Constabulary, in partnership with computer science academics at Cambridge University, began developing the Harm Assessment Risk Tool (HART), an artificial intelligence system designed to predict whether suspects are at low, moderate, or high risk of committing further crimes over the following two years.
27 Feb 2018
Under a secret deal beginning in 2012, the data mining company Palantir provided software to a New Orleans Police Department programme that used a variety of data such as ties to gang members, criminal histories, and social media to predict the likelihood that individuals would commit acts of violence.
In a draft January 2018 report obtained by Foreign Policy and produced at the request of US Customs and Border Protection Commissioner Kevin McAleenan, the Department of Homeland Security called for continuous vetting of Sunni Muslim immigrants deemed to have "at-risk" profiles.
In the remote western region of Xinjiang, the Chinese government is using new technology and human monitoring to oversee every aspect of citizens' lives. China has gradually increased restrictions in the region over the last ten years, blaming unrest and violent attacks for the need for these measures.
In 2013, in collaboration with the Illinois Institute of Technology, the Chicago Police Department set up the Strategic Subjects List, an effort to identify the most likely victims and perpetrators of gun violence. In 2016, a report published by the RAND Corporation found that the programme had not reduced gun violence.
18 Oct 2016
A 2016 report, "The Perpetual Line-Up", from the Center on Privacy and Technology at Georgetown University's law school, based on records from dozens of US police departments, found that African-Americans are more likely than others to have their images captured, analysed, and reviewed by computerised facial recognition systems.
10 Jan 2016
A new generation of technology has given local law enforcement officers in some parts of the US unprecedented power to peer into the lives of citizens. In Fresno, California, the police department's $600,000 Real Time Crime Center, which relies on threat-scoring software such as Beware, is providing a model for similar centres that have opened in New York and other cities.
24 Nov 2016
In 2016 researchers in China claimed an experimental algorithm could correctly identify criminals from images of their faces 89% of the time. The research involved training an algorithm on 90% of a dataset of 1,856 photos of Chinese males aged between 18 and 55 with no facial hair or markings, then testing it on the remainder.
10 Aug 2015
In May 2015, the US Department of Justice and the FBI submitted a declaration to an Oregon federal judge stating that the US government's no-fly lists and broader watchlisting system relied on predictive judgements about individuals rather than records of actual offences. The documents were filed as part of a lawsuit challenging the watchlisting system.
23 Mar 2016
In 2016, the Big Data lab at the Chinese search engine company Baidu published a study of an algorithm it had developed that it claimed could predict crowd formation, and suggested it could be used to warn authorities and individuals of public safety threats stemming from unusually large crowds.
In 2015, IBM began testing its i2 Enterprise Insight Analysis software to see if it could pick out terrorists, distinguish genuine refugees from imposters carrying fake passports, and perhaps predict bomb attacks. The tests used a scoring system based on several data sources and a hypothetical scenario.