We may challenge consequential decisions
Individuals should be able to know about, understand, question and challenge consequential decisions that are made about them and their environment. This also means that the controllers making these decisions must themselves have insight into and control over this processing.
Data observed, derived or predicted from our behaviour is increasingly used to rank, score and evaluate people automatically. Ever more advanced processing techniques turn these derived and inferred data into consequential decisions. If current trends continue, people will be scored in all aspects of their lives, societies will be managed invisibly, and human behaviour will be under the control of the few and the powerful.
What is the problem
Profiling makes it possible to infer or predict highly sensitive details from seemingly uninteresting data, producing derived, inferred or predicted data about people. As a result, it is possible to gain insight into someone’s presumed interests, identities, attributes or qualities without their knowledge or participation.
Such detailed and comprehensive profiles may or may not be accurate or fair. Increasingly, however, they are being used to make or inform consequential decisions, from finance to policing to the news users are exposed to and the advertisements they see. These decisions can be taken with varying degrees of human intervention and automation.
In increasingly connected spaces, our presumed interests and identities also shape the world around us. Real-time personalisation gears information towards an individual’s presumed interests. Such automated decisions can even be based on someone’s predicted vulnerability to persuasion or their inferred purchasing power.
Automated decisions about individuals, or about the environment they are exposed to, offer unprecedented capabilities to nudge, modify or manipulate behaviour. They also run the risk of creating novel forms of discrimination or unfairness. Since these systems are often highly complex, proprietary and opaque, it can be difficult for people to know where they stand or how to seek redress.
Why this matters
If data from every area of our lives can feed into consequential decisions, chilling effects follow. We would not want a world where people have to pre-emptively self-censor their online and offline behaviour because the data it generates might be used against them.
When our profiles are used to make consequential decisions about us and our environment, this can have significant consequences for individuals – from credit scoring to predictive policing, from making hiring decisions to nudging and shaping human action.
Decisions can be discriminatory, unfair, and/or inaccurate. On the one hand, inaccurate or systematically biased data can feed into profiles, which may lead to biased or discriminatory outcomes. At the same time, the process of profiling itself may generate data that is inaccurate. Individuals can be misclassified, misidentified or misjudged, and such errors may disproportionately affect certain groups of people. In fact, profiling creates a kind of knowledge that is inherently probabilistic.
Human intervention over consequential decisions is often proposed as a possible response. Consequential decisions are decisions that produce legal or similarly significant effects. Automated decisions are decisions made without any form of meaningful human intervention. Human intervention is only meaningful if the person intervening can critically assess how a system arrived at its recommendation and is authorised to decide against it. For instance, if an individual is assigned a risk score and this score shapes a decision, but the person making the decision cannot critically assess the score, the decision is de facto automated.
Ultimately, an environment that knows your preferences and adapts itself to these presumed interests raises important questions about autonomy and the ethics of such manipulation. Personalisation not just of information but of our perception of the world around us will become increasingly significant as we move towards connected spaces, such as smart cities, and into augmented and virtual reality.
What we would like to see
We would like to see a world in which individuals are not subjected to arbitrary, discriminatory or otherwise unfair decisions whose grounds and process they are unable to question, challenge or correct.
We would also like to see a world in which there are no secret profiles of people, and in which people do not have to fear that their profiles will lead to decisions that limit their rights, freedoms and opportunities.
Individuals will be able to know when their experiences are being shaped by their data profile, from targeted advertising to news to access to services, and will be able to object to that shaping and to correct their profiles.
What this will mean
Data protection frameworks around the world need to address the risks arising from profiling and automated decision-making, notably, but not only, the risks to privacy.
People will know when automated decision-making is taking place and the conditions under which it takes place, and they will have the right to redress.
Essential reform actions
Frameworks that have already incorporated profiling and automated decision-making need to make sure that the provisions cover all human rights critical instances of automated decision-making and account for the fact that different degrees of automation and human involvement can lead to similarly harmful outcomes.
Loopholes and exemptions in data protection law around profiling must be closed. Not all data protection laws recognise the use of automated processing to derive, infer, predict or evaluate aspects of an individual. Data protection principles need to apply equally to the data, insights and intelligence that such processing produces.
In addition to data protection laws, and depending on the context in which automated decision-making is deployed, sectoral regulation and strong ethical frameworks should guide the implementation, application and oversight of automated decision-making systems.
When profiling generates insights or when automation is used to make decisions about individuals, users as well as regulators should be able to determine how a decision has been made, and whether the regular use of these systems violates existing laws, particularly regarding discrimination, privacy, and data protection.
Public sector uses of automated decision-making carry a special responsibility: these systems must be independently auditable and testable.