Case Study: Invisible Discrimination and Poverty


Introduction

Online, and increasingly offline, companies gather data about us that determines what advertisements we see; this, in turn, affects the opportunities in our lives. The ads we are shown, whether we are invited to a job interview, and whether we qualify for benefits are all decided by opaque systems that rely on highly granular data. More often than not, such exploitation of data facilitates and exacerbates existing inequalities in society, without our ever knowing that it occurs. As a result, data exploitation disproportionately affects the poorest and most vulnerable in society.


What happened?

The gathering of data about us that determines what advertisements we see is usually invisible, with users knowing very little about what data is gathered and how it is analysed. What we do know, however, is that the results can be discriminatory: a study of the ads displayed by Google on news articles found, for instance, that users are far more likely to be shown adverts for “executive-level career coaching” if Google believes the user is a man. In 2013, Harvard University’s Latanya Sweeney found that online searches for black-identifying names are much more likely to return an ad suggestive of an arrest record.

Such discrimination can have real-world consequences, perhaps most evidently in housing. Neighbourhoods can entrench some populations in poverty. In the US, for example, black people are far more likely than white people to live in areas of concentrated poverty, which affects other aspects of their lives, from access to jobs and education to happiness itself. Targeted housing adverts online can become an important part of this, since they shape which neighbourhoods are even presented as options to certain populations.

Employment is another field where automated decision-making by opaque systems raises concerns. To some extent, these systems were developed to counteract the kind of “old boys’ network” in which jobs go only to people the hirer already knows. But we cannot assume that algorithms are fairer than human beings: such systems are often trained on historical data and thereby replicate an employer’s existing biases. In the US, 72% of CVs are never read by a person; instead they are sifted and rejected by computer programs. Job applicants who know about these systems will attempt to game them, or at least influence the results, by including in their CVs the keywords they think the system is looking for. Personality tests are another tool used in hiring, where a questionnaire filled in by the applicant is used to reject candidates. The risk of discrimination is high here too: many of the questions are essentially questions about a candidate’s mental health, and can end up shutting people with mental health conditions out of the job market.
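
To make the mechanism concrete, here is a minimal sketch of the kind of keyword-based CV sifting described above. The keyword list, threshold, and scoring rule are invented for illustration; this is not a description of any real applicant-tracking product.

```python
# Minimal sketch of keyword-based CV sifting, assuming an invented
# keyword list and threshold; not a description of any real
# applicant-tracking system.

REQUIRED_KEYWORDS = {"project management", "stakeholder", "kpi", "agile"}
SCORE_THRESHOLD = 3  # CVs scoring below this are rejected unread


def score_cv(cv_text: str) -> int:
    """Count how many of the expected keywords appear in the CV."""
    text = cv_text.lower()
    return sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)


def passes_filter(cv_text: str) -> bool:
    """Return True if the CV survives the automated sift."""
    return score_cv(cv_text) >= SCORE_THRESHOLD


# A candidate who knows the keywords gets through; an equally capable
# candidate who describes the same experience in plain language is
# rejected without any person reading their application.
keyword_aware = "Agile project management lead, owned stakeholder reporting and KPI tracking."
plain_spoken = "Ran a small team, planned our work, and reported results to the people who needed them."

print(passes_filter(keyword_aware))  # True
print(passes_filter(plain_spoken))   # False
```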

There are also ways in which the technology we use deepens existing socio-economic inequalities. For example, while the smartphone market in India is expanding quickly, most people buy phones that run older versions of the Android operating system; 72% do not include the latest version at the time of purchase. For many of these phones, an update to a newer, more secure version of Android is simply never made available. As a result, the poorest users are left with devices that are less secure and more exposed to attack: the poor are denied the opportunity to be secure online.


What’s the problem?

From targeted advertising to housing and employment, opaque and often unaccountable systems have the power to reinforce inequalities in society. They can do this by excluding certain groups of people from information or the job market, or by limiting their access to the benefits system.

The problem is that such discrimination can be invisible and even unintended. In the case of advertising, for instance, specifying “Whites Only” is illegal in many countries around the world. But there are other characteristics that can be used to infer, with some degree of likelihood, a person’s race: musical tastes, for example, or certain “hipster” traits that are more likely to apply to one race than another. As a result, an apartment advert targeted at people with certain tastes and interests can inadvertently exclude entire sections of the population. Those affected will not realise that they have been harmed at all. In the words of Michael Tschantz, a Carnegie Mellon researcher: “When the nature of the discrimination comes about through personalization, it's hard to know you're being discriminated against”. As a result, it is difficult to challenge decisions that are unfair, unjust, or simply unlawfully discriminatory.
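
As a rough illustration of how this kind of proxy discrimination can arise, the sketch below simulates an advert targeted only on an interest that happens to be unevenly distributed across two groups. The groups, the interest, and the correlation are all invented for illustration.

```python
# Sketch of proxy discrimination: the targeting rule never mentions
# group membership, only an interest, but the interest is unevenly
# distributed across groups. All numbers here are invented.

import random

random.seed(0)

# Assumed, for illustration: 80% of group A list the targeted interest,
# versus 20% of group B.
INTEREST_RATE = {"A": 0.8, "B": 0.2}


def simulated_user(group: str) -> dict:
    has_interest = random.random() < INTEREST_RATE[group]
    return {"group": group, "interests": {"indie_folk"} if has_interest else set()}


def show_housing_ad(user: dict) -> bool:
    """Advertiser's rule: target people interested in 'indie_folk'."""
    return "indie_folk" in user["interests"]


users = [simulated_user("A") for _ in range(1000)] + [simulated_user("B") for _ in range(1000)]
reach = {"A": 0, "B": 0}
for user in users:
    if show_housing_ad(user):
        reach[user["group"]] += 1

# Group A sees the apartment listing roughly four times as often as
# group B, even though race or group was never part of the rule.
print(reach)
```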

Another dimension to this problem is that those who are already marginalised are much more likely to be subjected to opaque and automated systems in the first place. While executives are headhunted, the CVs of those who work in low-paying jobs with high turnover are subject to automated sorting. Another area where this is especially evident is social security and government benefits. In 2014, Poland introduced the profiling of unemployed people: based on a computerised test, people were placed into one of three categories, which determined the level of support they received from the state. However, the algorithm, and thus the decision-making that affects the lives of benefits recipients, is kept confidential. This lack of transparency, combined with the risk of discrimination, leaves an already vulnerable population with little recourse.


What’s the solution?

Security and privacy must be shared without discrimination, designed into systems to protect everyone, everywhere. 

It is essential that our systems are designed to protect the excluded, marginalised, and poor. Because algorithmic decision-making can so easily deny these groups access to services and opportunities, systems must be built to ensure that they are not disadvantaged.


People should be able to correct, shape, or refresh their profile and derivative data. Data-driven inferences, predictions, and judgements about a person are processes that generate data. As such, they should be lawful, fair, and not excessive. The data that is generated should be fully disclosable to that person, and they should be able to challenge data-driven judgements about them, particularly when these are used to inform consequential decisions about them.

The fact that opaque systems are making consequential decisions about people’s lives means that the nature of the decision-making must be made explicit to the individual. Similarly, researchers, regulators, and watchdog organisations should be able to audit such systems for systematic biases. Given that such tools are widely used in employment, particularly for entry-level jobs, it is essential that people are able to understand how decisions about them are made, in order to break the cycle of rejection and to contest illegal discrimination and unfair treatment.
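
One simple form such an audit could take, sketched below, is to compare the selection rates a system produces for different groups against the “four-fifths” (80%) rule of thumb used in US employment-discrimination guidance. The outcome counts are invented; a real audit would be far more thorough.

```python
# Sketch of a basic disparate-impact check: compare selection rates by
# group against the "four-fifths" (80%) rule of thumb. The outcome
# counts below are invented data.

from collections import Counter

# (group, was_selected) pairs as a hiring system might log them
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 30 + [("B", False)] * 70)

selected = Counter(group for group, ok in outcomes if ok)
total = Counter(group for group, _ in outcomes)

rates = {group: selected[group] / total[group] for group in total}
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)         # {'A': 0.6, 'B': 0.3}
print(impact_ratio)  # 0.5 -- well below the 0.8 benchmark, so the system warrants scrutiny
```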