AI

09 Apr 2020
The risk detection company Dataminr has created an AI system that analyses social media posts to predict the next hotspots for COVID-19 outbreaks. The company claims it successfully predicted spikes seven to 13 days before they occurred - in the UK, in London, Hertfordshire, Essex, and Kent, and in
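Dataminr has not published its models, but the general shape of such an early-warning system can be sketched as a trailing-window anomaly detector over daily counts of, say, symptom-related posts. The window size and z-score threshold below are illustrative assumptions, not Dataminr's parameters:

```python
from statistics import mean, stdev

def flag_spikes(daily_mentions, window=7, z=2.0):
    """Flag days where mention volume exceeds the trailing mean
    by z standard deviations - a crude early-warning signal."""
    flags = []
    for i in range(window, len(daily_mentions)):
        past = daily_mentions[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and daily_mentions[i] > mu + z * sigma:
            flags.append(i)
    return flags

# A steady baseline followed by a sudden jump gets flagged.
print(flag_spikes([10, 11, 9, 10, 12, 10, 11, 40]))
```

A real system would also geolocate posts and filter for relevance before counting, but the flagging logic reduces to a comparison like this one.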
12 Apr 2020
Palantir and the British AI start-up Faculty are data-mining large volumes of confidential UK patient information to consolidate government databases and build predictive computer models under contract to NHSx, the digital transformation arm of the UK's National Health Service. NHSx said the goal is
17 Mar 2020
Russia has set up a coronavirus information centre to monitor social media for misinformation about the coronavirus and spot empty supermarket shelves using a combination of surveillance cameras and AI. The centre also has a database of contacts and places of work for 95% of those under mandatory
19 Mar 2020
Hakob Arshakyan, Armenia's minister of the high technology industry, has convened a research group of experts in IT and AI to collect and analyse data on the spread of coronavirus, compare it with the data collected by international partners, and develop forecasts. The
In November 2018, researchers at Sweden's University of Lund, the US's Worcester Polytechnic Institute, and the UK's Oxford University announced that in August the US State Department had begun using a software program they had designed that uses AI to find the best match for a refugee's needs
15 Jun 2018
In June 2018, a panel set up to examine the partnerships between Alphabet's DeepMind and the UK's NHS expressed concern that the revenue-less AI subsidiary would eventually have to prove its value to its parent. Panel chair Julian Huppert said DeepMind should commit to a business model, either non
05 Nov 2018
Shortly before the November 2018 US midterm elections, the Center for Media and Democracy uncovered documents showing that the multi-billionaire Koch brothers had developed detailed personality profiles on 89 percent of the US population with the goal of using them to launch a private propaganda
21 Sep 2018
In 2018 a report from the Royal United Services Institute found that UK police were testing automated facial recognition, crime location prediction, and decision-making systems but offering little transparency in evaluating them. An automated facial recognition system trialled by the South Wales
30 Apr 2018
In 2018 industry insiders revealed that the gambling industry was increasingly turning to data analytics and AI to personalise their services and predict and manipulate consumer response in order to keep gamblers hooked. Based on profiles assembled by examining every click, page view, and
14 May 2018
Three months after the 2018 discovery that Google was working on Project Maven, a military pilot program intended to speed up analysis of drone footage by automating classification of images of people and objects, dozens of Google employees resigned in protest. Among their complaints: Google
10 Sep 2018
In September 2018, AI Now co-founder Meredith Whittaker sounded the alarm about the potential for abuse of the convergence of neuroscience, human enhancement, and AI in the form of brain-computer interfaces. Part of Whittaker's concern was that the only companies with the computational power
15 Oct 2018
In March 2018 the Palo Alto startup Mindstrong Health, founded by three doctors, began clinical tests of an app that uses patients' interactions with their smartphones to monitor their mental state. The app, which is being tested on people with serious illness, measures the way patients swipe, tap
26 Jul 2018
In 2018, the chair of the London Assembly's police and crime committee called on London's mayor to cut the budget of the Mayor's Office for Policing and Crime, which provides oversight, in order to pay for AI systems. The intention was that the efficiencies of adopting AI would free up officers'
21 Sep 2018
In 2017, the head of China’s security and intelligence systems, Meng Jianzhu, called on security forces to break down barriers to data sharing in order to use AI and cloud computing to find patterns that could predict and prevent terrorist attacks. Meng also called for increased integration of the
17 May 2018
In May 2018, US Immigration and Customs Enforcement abandoned the development of machine learning software intended to mine Facebook, Twitter, and the open Internet to identify terrorists. The software, announced in the summer of 2017, had been a key element of president Donald Trump's "extreme
27 Sep 2018
In 2014, Canada began experimenting with automated decision-making algorithms in its immigration system to support the evaluation of some of the country's immigrant and visitor applications. In a 2018 study, Citizen Lab and NewsDeeply found that the use of AI was expanding despite concerns about bias
15 May 2018
In 2011, the US Department of Homeland Security funded research into a virtual border agent kiosk called AVATAR, for Automated Virtual Agent for Truth Assessments in Real-Time, and tested it at the US-Mexico border on low-risk travellers who volunteered to participate. In the following years, the
31 Oct 2018
In 2018, the EU announced iBorderCtrl, a six-month pilot led by the Hungarian National Police to install an automated lie detection test at four border crossing points in Hungary, Latvia, and Greece. The system uses an animated AI border agent that records travellers' faces while asking questions
18 May 2018
In May 2018, Google announced an AI system to carry out tasks such as scheduling appointments over the phone using natural language. A Duplex user wanting to make a restaurant booking, for example, could hand the task off to Duplex, which would make the phone call and negotiate times and numbers. In
28 Mar 2018
In March 2018, Facebook announced it was scrapping plans to show off new home products at its developer conference in May, in part because revelations about the use of internal advertising tools by Cambridge Analytica had angered the public. The new products were expected to include connected
Unisys' LineSight software uses advanced data analytics and machine learning to help border guards decide whether to inspect travellers more closely before admitting them into their country. Unisys says the software assesses each traveller's risk beginning with the
In 2012, Durham Constabulary, in partnership with computer science academics at Cambridge University, began developing the Harm Assessment Risk Tool (HART), an artificial intelligence system designed to predict whether suspects are at low, moderate, or high risk of committing further crimes in the
27 Feb 2018
Under a secret deal beginning in 2012, the data mining company Palantir provided software to a New Orleans Police Department programme that used a variety of data such as ties to gang members, criminal histories, and social media to predict the likelihood that individuals would commit acts of
20 Feb 2018
In 2018, pending agreement from its Institutional Review Board, the University of St Thomas in Minnesota will trial sentiment analysis software in the classroom. The software relies on analysing the expressions on students' faces captured by a high-resolution webcam
In a study of COMPAS, an algorithmic tool used in the US criminal justice system, Dartmouth College researchers Julia Dressel and Hany Farid found that the algorithm did no better than volunteers recruited via a crowdsourcing site. COMPAS, a proprietary risk assessment algorithm developed by
The first signs of the combination of AI and surveillance are beginning to emerge. In December 2017, the digital surveillance manufacturer IC Realtime launched a web and app platform named Ella that uses AI to analyse video feeds and make them instantly searchable - like a Google for CCTV. Company
02 Jan 2018
In February 2018 the Canadian government announced a three-month pilot partnership with the artificial intelligence company Advanced Symbolics to monitor social media posts with a view to predicting rises in regional suicide risk. Advanced Symbolics will look for trends by analysing posts from 160
02 Jan 2018
EU antitrust regulators are studying how companies gather and use big data with a view to understanding how access to data may close off the market to smaller, newer competitors. Among the companies being scrutinised are the obvious technology companies, such as Google and Facebook, and less obvious
11 Jan 2018
In 2017, a study claimed to have shown that artificial intelligence can infer sexual orientation from facial images, reviving the kinds of claims made in the 19th century about inferring character from outer appearance. Despite widespread complaints and criticisms, the study, by Michal Kosinski and
04 Sep 2017
The UK Information Commissioner's Office has published policy guidelines for big data, artificial intelligence, machine learning and their interaction with data protection law. Applying data protection principles becomes more complex when using these techniques. The volume of data, the ways it's
17 Oct 2017
A mistake in Facebook's machine translation service led to the arrest and questioning of a Palestinian man by Israeli police. The man, a construction worker on the West Bank, posted a picture of himself leaning against a bulldozer like those that have been used in hit-and-run terrorist attacks, with
A paper by Michael Veale (UCL) and Reuben Binns (Oxford), "Fairer Machine Learning in the Real World: Mitigating Discrimination Without Collecting Sensitive Data", proposes three potential approaches to deal with hidden bias and unfairness in algorithmic machine learning systems. Often, the cause is
04 Oct 2017
In 2017, after protests from children's health and privacy advocates, Mattel cancelled its planned child-focused "Aristotle" smart hub. Aristotle was designed to adapt to and learn about the child as they grew while controlling devices from night lights to homework aids. However, Aristotle was only
20 May 2015
In 2015, a newly launched image recognition function built into Yahoo's Flickr image hosting site automatically tagged images of black people with tags such as "ape" and "animal", and also tagged images of concentration camps with "sport" or "jungle gym". The company responded to user complaints by
04 Feb 2013
In 2013, Harvard professor Latanya Sweeney found that racial discrimination pervades online advertising delivery. In a study, she found that searches on black-identifying names such as Trevon, Lakisha, and Darnell are 25% more likely to be served with an ad from Instant Checkmate offering a
24 Jan 2014
In 2014, DataKind sent two volunteers to work with GiveDirectly, an organisation that makes cash donations to poor households in Kenya and Uganda. In order to better identify villages with households that are in need, the volunteers developed an algorithm that classified village roofs in satellite
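The published details of the GiveDirectly classifier are sparse, but the reported idea - that thatched roofs (a poverty proxy) look different from metal roofs in satellite imagery - can be sketched with a nearest-centroid classifier over two made-up pixel features. All features, values, and names below are illustrative assumptions:

```python
import random
import math

random.seed(0)

# Synthetic "roof patch" features: (mean brightness, texture variance).
# Assumption for illustration: metal roofs reflect strongly and look
# uniform, while thatch is darker and more textured.
metal = [(random.gauss(0.8, 0.05), random.gauss(0.10, 0.02)) for _ in range(200)]
thatch = [(random.gauss(0.4, 0.05), random.gauss(0.30, 0.02)) for _ in range(200)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

CENTROIDS = [centroid(metal), centroid(thatch)]  # index 0 = metal, 1 = thatched

def classify(patch):
    """Nearest-centroid label for a single roof patch."""
    d = [math.dist(patch, c) for c in CENTROIDS]
    return d.index(min(d))

def village_need_score(patches):
    """Fraction of a village's roofs classified as thatched."""
    return sum(classify(p) for p in patches) / len(patches)
```

Villages would then be ranked by `village_need_score` to prioritise cash transfers; the volunteers' actual model operated on real satellite pixels rather than two hand-picked features.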
27 Jun 2015
A 2015 study by The Learning Curve found that although 71% of parents believe technology has improved their child's education, 79% were worried about the privacy and security of their child's data, and 75% were worried that advertisers had access to that data. At issue is the privacy and security
01 Sep 2016
Automated systems such as the personality test developed by Massachusetts-based workforce management company Kronos are increasingly used by large companies to screen job applicants. To avoid falling foul of regulations prohibiting discrimination against those with mental illness, often the
01 Jun 2016
The price of using voice search is that Google records many of the conversations that take place within range of its voice-enabled devices. Users wishing to understand what Google has captured can do so by accessing the portal the company introduced in 2015. Their personal history pages on the site include both a page
24 Nov 2016
In 2016 researchers in China claimed an experimental algorithm could correctly identify criminals based on images of their faces 89% of the time. The research involved training an algorithm on 90% of a dataset of 1,856 photos of Chinese males between 18 and 55 with no facial hair or markings. Among
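The reported protocol - train on 90% of the 1,856 photos and evaluate on the remainder - is a standard held-out split, sketched below. Note that a high held-out accuracy says nothing about dataset bias, which was the central criticism of this study:

```python
import random

def train_test_split(items, train_frac=0.9, seed=42):
    """Shuffle a dataset and split it, e.g. 90% train / 10% held out."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

photos = list(range(1856))  # stand-ins for the study's 1,856 photos
train, test = train_test_split(photos)
```

If the two classes of photo come from systematically different sources (e.g. ID photos vs police photos), a classifier can score well on the held-out 10% while learning nothing about faces at all.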
27 Sep 2016
In 2016 researchers at the University of Texas at Austin and Cornell University demonstrated that a neural network trained on image datasets can successfully identify faces and objects that have been blurred, pixellated, or obscured by the P3 privacy system. In some cases, the algorithm performed
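The researchers used neural networks, but the underlying matching idea can be shown without one: rather than reconstructing the obscured image, apply the same obscuring transform to each labelled candidate and pick the closest match. The toy one-dimensional "images" and box blur below are illustrative stand-ins:

```python
def blur(img, k=3):
    """Box-blur a 1-D 'image' (list of pixel values) with window k."""
    n = len(img)
    out = []
    for i in range(n):
        lo, hi = max(0, i - k // 2), min(n, i + k // 2 + 1)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def identify(obscured_probe, gallery):
    """Match an obscured probe by obscuring each labelled gallery
    image the same way and picking the closest - no reconstruction."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(gallery, key=lambda item: dist(blur(item[1]), obscured_probe))[0]

gallery = [("alice", [0, 9, 1, 8, 2, 7, 3]),
           ("bob",   [5, 5, 0, 0, 5, 5, 0])]
probe = blur([0, 9, 1, 8, 2, 7, 3])  # alice's image, blurred
```

This is why blurring is a weak privacy defence whenever an attacker holds a set of candidate identities: the transform is deterministic, so it can simply be replayed on the candidates.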
22 Jun 2015
In 2015, Facebook's AI lab announced that its researchers had devised an experimental algorithm that could recognise people in photographs even when their faces are hidden or turned away. The researchers trained a sophisticated neural network on a dataset of 40,000 photographs taken from Flickr
05 Sep 2016
In September 2016, an algorithm assigned to pick the winners of a beauty contest examined selfies sent in by 600,000 entrants from India, China, the US, and all over Africa, and selected 44 finalists, almost all of whom were white. Of the six non-white finalists, all were Asian and only one had
23 May 2016
Computer programs that perform risk assessments of crime suspects are increasingly common in American courtrooms, and are used at every stage of the criminal justice system to determine who may be set free or granted parole, and the size of the bond they must pay. By 2016, the results of these
21 Sep 2016
In 2016, researchers at MIT's Computer Science and Artificial Intelligence Laboratory developed a new device that measures heartbeats by bouncing wireless signals off a person's body. The researchers claim that this system is 87% accurate in recognising joy, pleasure, sadness, or anger
23 Mar 2016
In 2016, the Big Data lab at the Chinese search engine company Baidu published a study of an algorithm it had developed that it claimed could predict crowd formation and suggested it could be used to warn authorities and individuals of public safety threats stemming from unusually large crowds. The
03 May 2016
In 2012, London's Royal Free, Barnet, and Chase Farm hospitals agreed to provide Google's DeepMind subsidiary with access to an estimated 1.6 million NHS patient records, including full names and medical histories. The company claimed the information, which would remain encrypted so that employees
A new examination of documents detailing the US National Security Agency's SKYNET programme shows that SKYNET carries out mass surveillance of Pakistan's mobile phone network and then uses a machine learning algorithm to score each of its 55 million users to rate their likelihood of being a
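SKYNET's actual model is classified; reporting on the documents describes behavioural features such as co-travel with persons of interest and frequent SIM or handset swaps. The weighted scoring-and-ranking pattern can be sketched as follows, with entirely hypothetical weights:

```python
# Hypothetical feature weights - illustrative only, not the NSA's model.
WEIGHTS = {"co_travel": 0.5, "sim_swaps": 0.3, "handset_swaps": 0.2}

def risk_score(features):
    """Weighted sum of normalised behavioural features (each in 0-1)."""
    return sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)

def flag_top(users, fraction=0.001):
    """Rank all users by score and flag the top fraction for review."""
    ranked = sorted(users, key=lambda u: risk_score(u[1]), reverse=True)
    cut = max(1, int(len(ranked) * fraction))
    return [name for name, _ in ranked[:cut]]

users = [("a", {"co_travel": 0.9, "sim_swaps": 0.8}),
         ("b", {"co_travel": 0.1})]
print(flag_top(users, fraction=0.5))
```

At the scale of 55 million users, even a tiny false-positive rate in such a ranking flags tens of thousands of innocent people, which is the core statistical criticism levelled at the programme.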
03 Apr 2018
In 2016, researchers discovered that the personalisation built into online advertising platforms such as Facebook is making it easy to invisibly bypass anti-discrimination laws regarding housing and employment. Under the US Fair Housing Act, it would be illegal for ads to explicitly state a