Case Study: Fintech and the Financial Exploitation of Customer Data

Date: 30 August 2017

Financial services are collecting and exploiting increasing amounts of data about our behaviour, interests, networks, and personalities to make financial judgements about us, such as our creditworthiness.

Increasingly, financial services – insurers, lenders, banks, and financial mobile app startups – are collecting and exploiting a broad range of data to make decisions about people. This particularly affects the poorest and most excluded in societies.

For example, decisions about whether to offer someone a loan or insurance, and on what terms, can now depend on:

  • A person’s social network: Lenddo, described by one journalist as “PageRank for people”, provides credit scoring based on a person’s social network.
  • The contents of a person’s smartphone: Tala, a California-based startup that offers loans in countries including Kenya, draws on who you call and message and when, what apps are on the device, location data, and more.
  • How a person uses a website, and their location: the British credit-scoring company Big Data Scoring analyses the way you fill in a form (in addition to what you say in it), how you use the website, on what kind of device, and in what location.
  • A person’s social media posts: the car insurer Admiral attempted to use information from young drivers’ Facebook posts to develop a psychological profile and offer them discounts on car insurance.


What happened?

Financial services have begun using the vast amount of data available about individuals to make judgements about them, using opaque artificial intelligence to rank and score people on their creditworthiness. As the founder and CEO of ZestFinance, Douglas Merrill, put it: “We believe all data should be credit data”. ZestFinance uses its machine-learning platform to work with financial institutions’ own data, as well as data from data brokers, to provide credit scoring. Its analysis revealed, for example, that borrowers who write in all-caps are less likely to repay their loans. In 2016, ZestFinance announced that it had joined up with the Chinese search giant Baidu to use Baidu’s search data as the basis for credit scoring.

More ‘traditional’ credit reports contain data on matters like the status of your bank and credit accounts, and the positive and negative factors affecting your ability to get credit. In many countries, the data which can – or must not – be included in credit reports is regulated by law. Yet financial services no longer consider only these finite sources of data relevant to credit scoring or loan decisions: everything from what you post on social media to the type of phone you use is now treated as relevant to financial decision-making. The use of these sources of data, rather than traditional credit files, is known as “alternative credit scoring”. Further, as the volume of data analysed grows, the decision-making process – often assisted by proprietary artificial intelligence – becomes even more opaque; it is harder and harder for an individual to understand, or even query, why they have been rejected for a loan or given a low credit limit.
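To make the idea of “alternative credit scoring” concrete, the toy sketch below combines a few invented behavioural signals – including the all-caps finding reported above – into a single score. The features, weights, and thresholds are entirely hypothetical and do not reflect any real company’s model; the point is only to show how non-financial data can silently shift a decision.

```python
# Toy sketch of "alternative credit scoring". All features and weights
# are invented for illustration; no real scorer works exactly this way.

def alternative_credit_score(applicant: dict) -> float:
    """Return a score in [0, 1] built from behavioural, non-financial signals."""
    score = 0.5  # neutral starting point
    if applicant.get("writes_in_all_caps"):
        score -= 0.15  # cf. the all-caps finding reported by ZestFinance
    # Reward a larger phone contact list, capped at 10 contacts' worth.
    score += 0.02 * min(applicant.get("contacts_in_phone", 0), 10)
    if applicant.get("form_fill_seconds", 60) < 5:
        score -= 0.10  # suspiciously fast form-filling
    return max(0.0, min(1.0, score))

applicant = {"writes_in_all_caps": True,
             "contacts_in_phone": 120,
             "form_fill_seconds": 45}
print(round(alternative_credit_score(applicant), 2))  # prints 0.55
```

Note that an applicant rejected by such a model has no obvious way to discover that, say, their typing habits – rather than their finances – moved the score.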

Much of the discourse surrounding alternative credit scoring focuses on the notion of “inclusion”: bringing in groups who previously had no access to credit or financial services. However, there has been little consideration of the risk of exclusion created by credit-scoring companies’ use of these new forms of data.

For example, different groups of people use their phones and social media in different ways. Some gay men in Kenya, for instance, maintain multiple Facebook accounts, for their own safety and to control who knows what about aspects of their lives. But when social media profiles are used to authenticate individuals’ identities, what impact does having multiple accounts have on the credit decisions made about them? It has also been reported that if lenders in India see political activity on someone’s Twitter account, they consider repayment more difficult and decline to lend to that individual.

Concerns about data, and about the varying degrees of opacity in often-automated decision-making, extend beyond the credit sector and are expanding across the financial services space. For example, in 2016 Admiral, a large car insurance provider, explored using the Facebook profiles of young drivers to offer discounts on car insurance. When Facebook prevented it from doing so, the company turned to quizzes to try to profile the personality of young drivers.

The amount of data that the financial sector gathers about our lives is increasing, while people are given limited options to opt out of having their data collected and exploited in this way. An example of this expansion can be seen in the car insurance industry. Vehicles are increasingly “connected” – meaning they use the internet in some way – and are essentially drivable computers. Within the vehicle, telematics units collect data about how it is driven and how its internal components are functioning. Car insurers consider this information highly valuable: it reveals how a person drives, as well as the locations they visit and when, and could be extended to cover how loudly music is played in the vehicle, and more. Privacy International is conducting research into what data is held by telematics units, and what data is transmitted to car companies and insurance providers.
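The kind of driving-behaviour feature a telematics unit might derive can be sketched very simply. The example below counts “harsh braking” events from per-second speed readings; the trip data and the braking threshold are invented for illustration, not taken from any real insurer’s system.

```python
# Illustrative sketch: deriving a driving-behaviour feature from raw
# telematics speed samples. The threshold and trip data are invented.

def harsh_braking_events(speeds_kmh, threshold_kmh_per_s=10.0):
    """Count one-second intervals where speed drops faster than the threshold."""
    return sum(
        1
        for earlier, later in zip(speeds_kmh, speeds_kmh[1:])
        if earlier - later > threshold_kmh_per_s
    )

# One speed reading per second for a short trip (km/h).
trip = [50, 52, 55, 40, 38, 20, 21, 22]
print(harsh_braking_events(trip))  # prints 2: the drops 55→40 and 38→20
```

Even this trivial feature shows why insurers value telematics data: a single number summarises behaviour the driver may not realise is being recorded at all.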


What’s the problem?

More and more data are used to make or shape decisions that determine access to financial services, drawn from sources far beyond what most people would think of as ‘financial’. The intrusion grows as more aspects of our lives are examined by the financial world to make judgements about who we are and what we might do. And because ever more data is used for credit scoring, the generation of ever more data is incentivised. The default payment option was previously (relatively) private – cash – but there are now many alternatives, and multiple actors are suddenly involved in each payment: depending on how you pay, it could involve your bank, the merchant’s bank, the credit card company, a mobile wallet provider, Apple, or Google.

One example of this increasing, yet often hidden, loss of control is financial companies’ collection of information about how people fill out an online form, in addition to what they enter in it. The scoring company Big Data Scoring, for example, does this through a cookie on a lender’s website that can gather data including how quickly you type in answers, what type of device you use, and your location. Most people would not consider how valuable this data is to the decision-making; what you enter on a form becomes perhaps less important than how you fill it in.
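How “how you fill in a form” becomes data can be sketched as follows: timestamped keystrokes are reduced to behavioural features such as total time and typing rate. The event timestamps below are invented, and this is not Big Data Scoring’s actual method – just a minimal illustration of the principle.

```python
# Illustrative sketch: turning keystroke timestamps from one form field
# into behavioural features. The timestamps are invented example data.

def typing_features(key_timestamps):
    """Return (total_seconds, keys_per_second) from keypress timestamps."""
    if len(key_timestamps) < 2:
        return 0.0, 0.0
    total = key_timestamps[-1] - key_timestamps[0]
    return total, (len(key_timestamps) - 1) / total

# Timestamps (in seconds) of each keypress while answering one question.
stamps = [0.0, 0.4, 0.9, 1.1, 1.6, 2.0]
total, rate = typing_features(stamps)
print(total, round(rate, 2))  # prints 2.0 2.5
```

A visitor sees only a form; the features above are computed invisibly, which is precisely the hidden loss of control the paragraph describes.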

There are potentially broad consequences for society stemming from financial data exploitation. Given the opacity of the decision-making, how can people know that the decisions being made are fair? And what about individuals who attempt to ‘game the system’ – can we be sure that the comments on someone’s Twitter profile are their real thoughts, or are they an attempt to get a better rate on a loan?


What’s the solution?

The judgements made by financial services are among the most consequential in our lives, so it is essential that they are lawful and fair. As with the traditional credit file – where in much of the world we have a right to see the data stored about us, and lenders have a duty to keep that data accurate – we must have the right to know what data is gathered and used to make these judgements about us. We must have the right to challenge the decisions made about us.

Financial services are essential to living in the modern world, so it is essential that they operate in a way that protects our privacy. We must not live in a world where the default option is to generate data that can be collected and exploited by financial services. For example, we must retain the option to use cash, without it being treated as a morally dubious choice.

We will need to provide some data to financial service companies in order for them to provide services suited to our needs, but this must be weighed against the implications for our privacy. Even with its vastly increased scope of data, the financial sector cannot be allowed to leave us powerless over how we manage our identity, and over the boundaries we set to protect ourselves from unwanted intrusion.

Given that many firms aim their services at the poor and marginalised – those with little or no formal credit history – these groups are the most likely to be required to give up more data about themselves. We must design our systems to protect the rights of the marginalised and poor, while at the same time ensuring that they are not excluded.

Finally, we must have regulation that ensures the growth of these new forms of financial services does not discriminate against certain populations, and that the technology is in place to make the algorithms auditable.