Micro-targeting refers to an increasingly popular online marketing strategy. By collecting people’s data and using it to segment them into groups, companies and political parties are able to target different messages and content at each group, mostly in the form of adverts.

That advert for a suitcase that follows you around the internet? You’ve been micro-targeted. That political party message appearing in your feed raising issues about your kid’s school? You’ve been micro-targeted.

This complex practice can be divided into the following phases:

  1. Data collection: Personal data is collected or obtained, whether through hidden means such as trackers placed on the websites you visit, from open sources such as voter registries and social media, or by buying databases and profiles from third parties such as data brokers.
  2. Profiling: Users are divided into small groups or “segments” based on characteristics such as personality traits, interests, background or previous voting behaviour.
  3. Personalisation: Tailored content is designed for each segment.
  4. Targeting: The personalised content is distributed via online platforms so that each segment receives its tailor-made messages.
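To make these four phases concrete, here is a minimal, purely illustrative sketch in Python. The data, segment names and messages are entirely invented for the example; no real advertiser or platform works in exactly this way.

```python
# Purely illustrative: toy versions of the four phases, with invented data.
# No real advertiser or platform works exactly like this.

# 1. Data collection: a record pieced together from trackers, open sources
#    and bought-in databases.
user = {
    "id": "u123",
    "age": 41,
    "postcode_area": "SW1",
    "pages_visited": ["schools-funding", "local-news", "holiday-deals"],
    "inferred_interests": ["education", "travel"],
}

# 2. Profiling: assign the user to a segment based on the collected attributes.
def profile(u):
    if "education" in u["inferred_interests"] and u["age"] > 30:
        return "concerned-parents"
    if "travel" in u["inferred_interests"]:
        return "frequent-travellers"
    return "general"

# 3. Personalisation: a pre-written message variant for each segment.
messages = {
    "concerned-parents": "Our plan will protect funding for your local school.",
    "frequent-travellers": "Lightweight suitcases, 20% off this week only.",
    "general": "Find out what we stand for.",
}

# 4. Targeting: deliver the segment's tailor-made message to the user.
segment = profile(user)
print(f"Showing to {user['id']} ({segment}): {messages[segment]}")
```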

Micro-targeting is a technique deployed by commercial and political actors on online platforms, with consequences for consumers and voters alike.

There is no standard method of micro-targeting, and advertisers can execute each of the four stages described above in different ways. Some may use personal data provided directly to them by the online user, such as information freely entered when creating an account or starting a subscription service. Others may use personal data or databases acquired from third parties, such as data brokers, in addition to the data they have obtained themselves. Or they may combine several databases to deepen their insights into you and your life.

Depending on the platform where an advertiser chooses to display their ads, additional options may be available. For example, some social media platforms let advertisers upload their own data so the platform can generate an audience from it. Facebook, for instance, allows advertisers to target “look-alike” audiences: people they have never interacted with before, but who “look like” the individuals they have previously interacted with.
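Facebook’s actual look-alike modelling is proprietary, but the general idea can be sketched under a simple assumption: represent each person as a vector of attributes and rank platform users by how similar they are to the advertiser’s uploaded “seed” audience. The attributes and maths below are illustrative assumptions, not the platform’s real method.

```python
# Illustrative look-alike matching: rank platform users by similarity to a
# "seed" audience uploaded by an advertiser. Real systems are proprietary and
# far more sophisticated; the attributes and maths here are assumptions.
import math

def similarity(a, b):
    """Cosine similarity between two attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Each vector: [age (scaled), interest_in_politics, interest_in_travel]
seed_audience = [[0.4, 1.0, 0.0], [0.5, 0.9, 0.1]]             # advertiser's own contacts
platform_users = {"u1": [0.45, 0.95, 0.0], "u2": [0.2, 0.0, 1.0]}

# Average the seed vectors, then rank everyone else by closeness to that centroid.
centroid = [sum(col) / len(seed_audience) for col in zip(*seed_audience)]
ranked = sorted(platform_users.items(),
                key=lambda kv: similarity(kv[1], centroid), reverse=True)
print(ranked)  # u1 "looks like" the seed audience far more than u2 does
```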

The end result of all this mixing and matching is that the amount of data held and used on and against individuals can be staggering. The granularity of this data makes it possible for political parties to divide potential voters into categories or segments, according to carefully selected criteria or scoring systems. A study of responses to data subject access requests submitted to dominant political parties in the UK revealed scoring systems reliant on variables ranging from ethnicity and mother tongue to enjoyment of tabloid newspapers.
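The parties’ actual scoring models were not disclosed, but, as a purely hypothetical illustration, a score of this kind may amount to little more than a weighted sum of such variables, with thresholds deciding which segment (and therefore which adverts) a voter is assigned to. The weights, variable names and thresholds below are invented for the example.

```python
# Hypothetical weighted-sum voter score: the weights, variables and thresholds
# are invented to illustrate the kind of model described above, not any
# party's actual (undisclosed) system.
def likelihood_score(voter):
    weights = {
        "reads_tabloids": 0.3,       # e.g. inferred from panel or subscription data
        "voted_last_election": 0.5,  # e.g. from the marked electoral register
        "age_over_50": 0.2,
    }
    return sum(weights[k] for k, v in voter.items() if k in weights and v)

voter = {"reads_tabloids": True, "voted_last_election": False, "age_over_50": True}
score = likelihood_score(voter)
segment = "persuadable" if 0.3 <= score < 0.7 else "low priority"
print(score, segment)  # 0.5 "persuadable": this voter gets tailored adverts
```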

What is the problem

The processes behind micro-targeting are extremely opaque and reliant on a hidden data ecosystem made up of hundreds of companies you have most likely never heard of or interacted with. Users are left with little information as to how they came to be targeted by a particular ad, and why. PI research has shown that tracing the journey of one’s own personal data is fraught with obstacles. When companies that you never interacted with come to hold your personal data, legitimate questions arise regarding the fairness and lawfulness of that data processing, and the impact on your right to privacy.

That’s just the beginning. Some forms of micro-targeting can be particularly insidious, such as psychographic targeting, which relies on personality and behaviour data and inferences. Some advertisers may use such data to build an intricate profile of a person’s perceived interests, values and vulnerabilities, and target ads accordingly. PI research revealed that popular websites about depression in France, Germany and the UK share user data with advertisers, data brokers and large tech companies, while some websites offering depression tests leak answers and test results to third parties.

Arguably, the ongoing lack of transparency surrounding micro-targeting is a harm to privacy in and of itself. But micro-targeting in the online campaigning context can cause wider, societal harms. At best, it can create silos and echo chambers, thus polarising voters and ultimately distorting public debate and restricting the civic space. At worst, it can facilitate exclusion and discrimination. A manifestation of this is voter suppression. An investigation revealed that micro-targeting had been used to actively deter Black Americans from voting in the 2016 US election, with 3.5 million Black Americans categorised as voters to be deterred.

What is the solution

Increased user controls. When PI and ORG commissioned a YouGov poll following the 2019 UK General Election, we learned that most voters oppose the use of targeted ads during elections. Users should not be subjected to targeting methods that they object to. Online platforms should ensure that the default position is one where users are shielded from micro-targeting practices by way of opt-in controls, as opposed to opt-out mechanisms.

Heightened transparency. Added controls are, however, not a substitute for meaningful transparency. Users must be provided with substantially more information about the targeting methods used against them, the data the targeting was based on, where that data came from, and on what basis it was processed. Such information would enable users to discern the information they want to consume, better understand the biases of advertisers, and better understand how they are being profiled and categorised by advertisers and online platforms.

Data protection enforcement. A well implemented and strongly enforced data protection law imposes significant limits on micro-targeting.

Further regulation. However, data exploitation in online advertising requires further regulation beyond data protection law, including new regulatory powers for electoral commissions and data protection authorities.

What is PI doing

PI is working to better understand and uncover the hidden data ecosystem behind micro-targeting practices, identify the actors involved, hold them to account where necessary, and ensure that users are provided with the transparency and agency they deserve: