What's the problem?

At the moment, in part as a result of the Covid-19 pandemic, workers are facing increased surveillance from their employers. From surveillance software deployed on work devices to popular productivity suites like Office 365 or Slack offering features that could be used for surveillance, employee monitoring solutions have gained in popularity over the last few years, boasting the ability to quantify productivity, identify underperforming workers and optimise spending. These promises can sound tempting to employers, but they come with a level of intrusiveness, and limited evidence of effectiveness, that cannot be ignored, especially as they risk reducing workers' autonomy.

On the other side of this coin, many gig economy apps and platforms have made the collection of personal data and the monitoring of their users for algorithmic purposes the default. Job assignment, content promotion or demotion, strategic decisions... algorithmic management by platforms is creating an environment where gig workers, platform workers and creators must give up an immense amount of personal data simply to be able to work. Yet taking part in these systems does not guarantee that workers can access this data or even obtain a clear explanation of what it might be used for.

As such, workers are put in the difficult position of feeling that they must consent to monitoring and surveillance in order to keep their job or continue their activity on a platform. The inherent imbalance of power between employees and employers forces workers to compromise their privacy and autonomy in order to keep working. These systems also place a great deal of confidence in algorithms and automated decision-making systems, with little transparency about how they function and how they can be challenged. Although transforming workers' activities into data to make decisions about them might sound appealing from a cost-optimisation perspective, it disregards many of the things that make us human while opening the door to abuse.

What's the solution?

Workers should not have to choose between their privacy and their jobs. Monitoring and surveillance tools should only be deployed under special circumstances and with safeguards in place to prevent abuse, such as workers' unrestricted access to the data collected about them and transparency about the means and goals of such systems. Compared with less intrusive alternatives, software monitoring can easily result in privacy violations, making it unnecessarily or disproportionately invasive.

Whether they are employees or working for an app or a platform, workers should be given transparency about what data is being collected about them and for what purposes, and be given access to it. Under European data protection laws, for example, workers have a right to know exactly what information is being collected by their employer, how it is being used and shared, and how their personal data is being protected.

Employers should also consider the impact that monitoring systems and automated decision-making systems will have on the morale of their employees, and the risk that such software alienates staff. Employers should consider whether such monitoring is necessary at all, weighing the negative impact it may have on their relationship with their staff and on their business in the future.

In the case of gig economy apps and platforms, the collection and processing of workers' data should be subject to the same level of scrutiny and should also be legally justified before deployment.

What is PI doing?

PI has researched multiple workplace surveillance technologies used by employers and platforms, both before and since the onset of the Covid-19 pandemic. We have investigated the data that gig economy companies collect about their workers, the deployment of surveillance features in widely used productivity suites, and the relationship between content creators and the platforms on which they publish, from both a privacy and a security perspective.

This work informs our growing advocacy to protect workers against discriminatory algorithmic management systems and privacy-invasive technologies.