Nowhere to Hide? Privacy Risks and Policy Implications of AI Geolocation

Discover how today’s AI can reveal your location from a single photo — and what that means for your privacy

Key findings
  • We demonstrate the geolocation capabilities of modern Vision-Language Models (VLMs), presenting new research on how these systems can infer location from images
  • We outline key privacy risks, including identification, tracking, data leakage, gendered threats, commercial misuse, and dual‑use concerns
  • We highlight relevant legal and regulatory frameworks, identifying accountability challenges that require urgent attention for effective governance of VLMs
Report
[Cover image: a couple on holiday being geolocated]

One of the most surprising and concerning capabilities of the newest Artificial Intelligence (AI) systems is their ability to infer geographic location from images. Vision‑Language Models (VLMs) can now determine where in the world a given photo was taken with striking speed and accuracy. Most people are unaware that widely accessible AI tools can identify the location of their personal photos, even when Global Positioning System (GPS) metadata has been removed.
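To make concrete what this metadata is, here is a minimal sketch (ours, not part of the report) showing how embedded GPS coordinates can be inspected and stripped from a photo. It assumes the Pillow imaging library and a hypothetical local file, holiday_photo.jpg; note that removing this metadata does not stop a VLM from inferring location from the image content itself.

```python
from PIL import Image  # pip install Pillow

# Open a photo and read its EXIF metadata.
# "holiday_photo.jpg" is a hypothetical example file.
img = Image.open("holiday_photo.jpg")
exif = img.getexif()

# 0x8825 is the standard EXIF pointer to the GPS information block,
# where phones record latitude, longitude, altitude and a timestamp.
gps_ifd = exif.get_ifd(0x8825)
print("GPS metadata:", dict(gps_ifd) if gps_ifd else "none found")

# Re-saving with Pillow drops EXIF unless it is explicitly passed back in,
# so this copy carries no embedded coordinates; a VLM can still
# geolocate it from visual cues alone.
img.save("holiday_photo_no_gps.jpg")
```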

Inferring location from images without GPS data can support beneficial activities, such as robotics development or investigative journalism, but these capabilities are not risk-free. VLMs’ capabilities create serious risks for privacy and other human rights. They can transform an ordinary photograph into sensitive personal information. Immediate risks include covert surveillance, doxxing, discriminatory policing, and profiling. Beyond individual harms, geolocation capabilities may have chilling effects on freedoms of expression and assembly. They might also enable social media platforms or other companies to monetise inferred location data.

This report is one of the results of the project Assessing and Mitigating Privacy Risks of Vision-Language Models in Image-based Geolocation Systems (PRIV-LOC), funded by the UK AI Security Institute (AISI). The project was a collaboration between researchers at the University of Southampton, University College London, Queensland University of Technology, and Privacy International. Additional support came from the UKRI Responsible AI programme and the UKRI Generative AI Hub.

In this report, we focus on the risks. We present our latest research on VLM geolocation performance before examining the range of harms these capabilities may produce, whether used by experts or ordinary members of the public. Despite growing evidence of these capabilities, significant uncertainties remain. VLMs infer location through visual cues in images, but especially in closed‑source, ‘black box’ systems, the underlying processes are difficult to understand. What is clear, however, is that these models already pose both immediate and long‑term concerns.
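To illustrate how little expertise such an inference now requires, the sketch below (our illustration, not the project’s experimental setup) asks a widely accessible closed-source VLM to geolocate a photo and explain the visual cues it relied on. It assumes the OpenAI Python client, an API key in the environment, and the hypothetical stripped image from the earlier example:

```python
import base64
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

# Encode a local photo (hypothetical filename) for the API request.
with open("holiday_photo_no_gps.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # any current vision-capable model would serve here
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Where in the world was this photo taken? "
                     "List the visual cues you used."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Any comparable vision-capable model could be queried the same way; the point is that the capability sits behind a few lines of code rather than a specialist tool.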

Specifically, we map key privacy risks arising from VLMs. These include direct identification and re‑identification; tracking and location inference; training data and model release risks; memorisation and data leakage; membership inference; model inversion and reconstruction; authoritarian surveillance; gendered threats; commercial exploitation; structural data biases; psychological and social impacts; and dual‑use risks. We conclude with an assessment of the legal and regulatory landscape and highlight accountability gaps relevant to VLM oversight. Without harmonised approaches to risk assessment, mitigation, and governance, people who share images online may increasingly find themselves with nowhere to hide.