Content type: Examples
12th August 2019
The US Department of Homeland Security awarded a $113 million contract to General Dynamics to carry out the Visa Lifecycle Vetting Initiative (VLVI), a renamed version of the Extreme Vetting Initiative and part of a larger effort called the National Vetting Enterprise. In May 2018, public outrage led the DHS to back away from a machine learning system that would monitor immigrants continuously; however, the reason it gave was that the technology to automate vetting did not yet exist. These…
Content type: Examples
12th August 2019
VeriPol, a system developed at the UK's Cardiff University, analyses the wording of victim statements in order to help police identify fake reports. By January 2019, VeriPol was in use by Spanish police, who said it had helped them identify 64 false reports in one week and was successful in more than 80% of cases. The basic claim is that AI can find patterns common to false statements; among the giveaways, experts say, false statements are likely to be shorter than genuine ones and to focus…
Content type: Examples
12th August 2019
In October 2018, the Singapore-based startup LenddoEFL was one of a group of microfinance startups aimed at the developing world that used non-traditional types of data such as behavioural traits and smartphone habits for credit scoring. Lenddo's algorithm uses numerous data points, including the number of words a person uses in email subject lines, the percentage of photos in a smartphone's library that were taken with a front-facing camera, and whether they regularly use financial apps on…
Content type: Examples
12th August 2019
In November 2018, tests began of the €4.5 million iBorderCtrl project, which saw AI-powered lie detectors installed at airports in Hungary, Latvia, and Greece to question passengers travelling from outside the EU. The AI questioner was set to ask each passenger to confirm their name, age, and date of birth, and then query them about the purpose of their trip and who was paying for it. If the AI believes a person is lying, it is designed to change its tone of voice to become "more skeptical"…
Content type: Examples
12th August 2019
In November 2018, worried American parents wishing to check out prospective babysitters and dissatisfied with criminal background checks began paying $24.99 for a scan from the online service Predictim, which claimed to use "advanced artificial intelligence" to offer an automated risk rating. Predictim based its scores in part on Facebook, Twitter, and Instagram posts - applicants were required to share broad access to their accounts - and offered no explanation of how it reached its risk…
Content type: Examples
12th August 2019
In November 2018, researchers at Sweden's University of Lund, the US's Worcester Polytechnic Institute, and the UK's Oxford University announced that in August the US State Department had begun using a software program they had designed that uses AI to find the best match for a refugee's needs, including access to jobs, medical facilities, schools, and nearby migrants who speak the same language. Refugees matched by the program, known as "Annie MOORE", were finding jobs within 90 days about a…
Content type: Examples
12th July 2019
In 2017, US Immigration & Customs Enforcement (ICE) announced that it would seek to use artificial intelligence to automatically evaluate the probability of a prospective immigrant “becoming a positively contributing member of society.” In a letter to acting Department of Homeland Security Secretary Elaine Duke, a group of 54 concerned computer scientists, engineers, mathematicians, and researchers objected to ICE’s proposal and demanded that ICE abandon this approach because it would be…