What's the problem?

Companies around the world are collecting ever more data about their workers. This often happens through invasive surveillance technologies without workers knowing what data is being collected from them, for what purpose, how they can access it, or how to challenge these practices.

Whether a company employs warehouse workers, operates in the gig economy, or runs a platform that hosts videos created by content creators, the data it collects can be used to make automated decisions with major consequences for those affected by them. In some sectors, algorithmic management has become the default and workers must give up an immense amount of personal data just to go to work. Tracking tools claim to quantify productivity, identify underperforming workers and optimise spending, but there is limited evidence of their effectiveness.

Decisions made by algorithms can determine how much individuals are paid and even whether their employment or accounts are suspended or terminated. Workers are often not provided with satisfactory explanations as to how these decisions are made. This lack of transparency means that decisions made through the "black-box" of an algorithm are seemingly impossible to challenge.

While gig economy workers, content creators and warehouse operatives are at the sharp end of black-box management, opaque and intrusive surveillance practices are embedding themselves across many industries and workplaces. The Covid-19 pandemic and the rise of working from home have provided fertile ground for deploying surveillance technology on workers, such as monitoring keystrokes and mouse movements to assess job performance.

Why does it matter?

Managing individuals through data-intensive surveillance can affect people's physical and mental health, put them in precarious financial positions, and result in unfair discrimination. It erodes their dignity, agency, and autonomy.

Workers are being put in difficult situations where they have to consent to monitoring and surveillance in order to keep their job or continue their activity on a platform. The inherent imbalance of power between workers and employers forces workers to compromise their privacy and autonomy in order to earn a living. These harms can have long-term consequences for working relationships and employee morale, as workers feel increasingly alienated.

Workplace surveillance and algorithmic management also allow companies to engage in unfair and potentially unlawful conduct with impunity, due to a lack of transparency and accountability. This not only affects individuals but can even undermine the rule of law as a whole.

What's the solution?

Workers should not have to choose between their privacy and their jobs. Surveillance and algorithmic decision-making tools should only be deployed in exceptional circumstances and with safeguards in place to prevent abuse.

There must also be transparency built into workplace surveillance and algorithmic management systems. Workers should know what data is being collected about them and for what purposes, and be given access to it. Under European data protection law, for example, workers have a right to know exactly what information their employer collects, how it is used and shared, and how their personal data is protected.

Employers should also consider the impact these systems have on working relationships and employee morale, and whether such monitoring is necessary at all, given the damage it may do to their relationship with their staff and to their business in the long run.

What is PI doing?

In the past, we have:

  • Worked with other organisations, including trade unions, to bring the harm that can result from sophisticated and opaque algorithmic management to the public's attention.
  • Researched the consequences that algorithmic management can have for workers who rely on platforms such as YouTube, Twitch and Pornhub for their livelihoods.
  • Informed people about how they can safeguard their privacy and the security of their data when creating content on platforms.
  • Researched how productivity suites like Office 365 offer features that enable employers to access employees' communications and activities without their knowledge.
  • Argued for improvements in the laws that protect individuals impacted by algorithmic management.

Now, we are deepening our understanding of the real-world impacts of black-box management and monitoring how it can threaten people's rights, privacy and dignity. We want to understand who is most affected, expose those responsible, and take action to help people stand up for their rights.