Taming Pegasus: A Way Forward on Surveillance Tech Proliferation
As revelations about the abuses of NSO Group's spyware continue, we take a look at what is being done around the world to challenge the surveillance tech industry and the powers it sells.
- Recent revelations have once again highlighted the threat posed by the surveillance tech industry
- Around the world, activists and others are challenging the industry and the powers it is selling
- We take a look at some of the things that need to change, including campaigns targeted at governments, companies, and people
As Amnesty International and Forbidden Stories continue to publish crucial information about the potential targets of NSO Group’s spyware, we know this much already: something needs to be done.
But what exactly needs to be done is less obvious. Even though this is not the first time that the world has learned about major abuses by the surveillance industry (indeed, it’s not even the first time this month), it’s difficult to know what needs to change.
So how can the proliferation and use of surveillance tech – sold by over 500 companies worldwide by our last count – be challenged?
Below, we take a look at some of the strategies activists and others are pursuing around the world to promote protections. These can roughly be classified as targeting governments, industry, and people.
To be clear, there are no easy solutions here, and there is no mechanism that will protect everyone, all the time, everywhere. Rather, appropriate protections can vary hugely, depending on factors such as the company involved, the technology they are selling, who it is being used by, and where.
Targeted at Governments
Change the Laws
The legal framework governing the use of surveillance is the most obvious place to start. The theory goes: if a law exists that provides effective safeguards on the use of a technology, rights will be protected, because the law limits how the technology can be used regardless of what is available on the market.
For example, ensuring that surveillance is only carried out on the basis of a warrant that is approved by a judge is a key safeguard. There are numerous other principles and safeguards which must be in place – from legality, meaning that a power must actually be prescribed by law, to ensuring that those subjected to unlawful surveillance have access to remedies. A list of such safeguards as they apply to the type of spyware sold by NSO is available here.
When it comes to tools such as NSO's, only a handful of countries, such as the United Kingdom, Germany, and Sweden, currently have laws which aim to specifically govern the use of malware by government agencies, and all of them, for different reasons, fall short of the standards required by human rights law.
Challenging laws which don't comply with human rights standards and advocating for ones that do is the bread and butter of many surveillance activists around the world; PI, for example, has successfully challenged and won several cases, before both domestic and international courts, which resulted in placing limits on how surveillance powers are used.
One notable recent example of this was achieved by the amaBhungane Centre for Investigative Journalism and journalist Stephen “Sam” Patrick Sole in South Africa. After learning that state spies had been recording Sole’s phone communications for (at least) six months in 2008, they challenged the constitutionality of certain sections of the regulatory framework of South Africa. Earlier this year, the Constitutional Court of South Africa declared that bulk interception powers used by the South African intelligence agency were unlawful and invalid. The agency subsequently confirmed the suspension of the powers.
The pros of such litigation are obvious: it has a direct impact on how technologies are used, placing strong protections over their use.
However, such a strategy is slow: legal challenges can take years to reach their conclusion, during which time the technology or the laws themselves may have moved on. And their effect will very often be limited to the jurisdictions concerned.
One avenue that has been effective, particularly in the United States, is the use of laws which promote transparency over the procurement and subsequent use of surveillance technology and that allow people to have a say in what tools authorities use.
For example, the ACLU’s Community Control Over Police Surveillance (CCOPS) initiative provides a model law which local government authorities can adopt. The model law ensures, for example, that the police have to get approval from local authorities before purchasing surveillance technology, that authorities have to solicit feedback from the public and establish an accessible use policy, and that they provide annual reports for the public and local authorities with the aim of keeping them accountable. Such laws may also contain provisions that ban the use of a certain technology, such as facial recognition.
In the US, as a result of the CCOPS initiative, laws have been passed in numerous cities, including Seattle, Palo Alto, and San Francisco, allowing anyone to see what technologies local authorities are using and what their impact is.
Outside of the US, similar efforts have recently got underway in the European Union (EU), with civil society organisations across Europe, of which Privacy International is a key part, aiming to convince the EU to pass laws banning biometric mass surveillance.
While such laws have promoted transparency, public engagement, and accountability, they have nevertheless not been applied to federal agencies which often have access to more sophisticated technology than police forces, or in countries with poor local representation or ineffective systems of governance.
Drivers of Surveillance
The security apparatuses of many countries in the world, arguably the vast majority, are highly dependent on and influenced by powerful countries with large military and intelligence budgets, such as the US, China, or countries in the EU. These influential countries have the money, technology, and expertise, and they provide them to others in order to gain influence and their governments' cooperation.
The US spent nearly $19 billion in 2019 on providing foreign countries with security aid, part of which includes surveillance training and equipment. From the NSA, which equips counterpart intelligence agencies, to the Department of Justice, which provides legislative assistance on surveillance laws, a dizzying and overlapping range of US bodies and agencies currently provide surveillance aid to their foreign counterparts.
Similarly, China also provides countries around the world with such support, including as part of its Belt and Road Initiative under which it provides countries with training on controlling the internet.
The EU and its powerful member states also provide countries with surveillance aid, particularly to border agencies in their neighbourhood: Privacy International recently published hundreds of slides, obtained through a massive access to documents exercise, showing how EU bodies are training officials in Northern Africa and the Balkans on how to spy on people, and are providing them with wiretapping and other surveillance equipment.
Various UN organs also provide countries with equipment and training. For instance, the UN Office on Drugs and Crime provides countries with surveillance software used to extract data from mobile phones.
Taken together, there is no shortage of international donors willing and able to provide any government with the equipment, money, and expertise in how to spy on people. Indeed, there are likely to be even more potential sources in the near future as the US seeks to counter China's growing international influence, with initiatives such as the Democracy Technology Partnership Act. The latter would equip countries with US surveillance equipment.
At Privacy International, we see two main priorities for change: first, these donor governments need to stop providing equipment and training which is likely to be used for human rights abuses, including spying on people without adequate safeguards. Secondly, these governments should instead invest their vast resources in promoting legal frameworks that safeguard against abuses of surveillance powers and in ensuring that counterpart agencies do not engage in human rights abuses.
Export Controls
Countries with large arms and surveillance industries all have laws which govern to whom they can sell their equipment. These export control laws require companies that sell specific types of surveillance equipment to apply to a relevant government department, specifying things like what they want to sell and to whom. The department can then approve or reject the application.
Such a process has stopped the export of some surveillance technology: for example, if a government's policy is to not allow exports where there is a risk the technology will be used for human rights abuses, it can deny the license. Indeed, NSO Group claims that the Israeli government has stopped it from exporting to some clients. Further, some governments also publicly report on the applications they receive and whether they have approved them. This offers an invaluable source of information on surveillance exports, as it allows the public and parliamentarians to hold government departments accountable for their decisions.
While on paper this sounds like an important mechanism for protecting human rights, in practice it is dominated by national security considerations, with human rights often playing a peripheral role. As with military equipment, which is also subject to export restrictions, governments tend to overwhelmingly allow such exports to go ahead for economic and security reasons. For example, the UK has approved over 300 license applications for the export of telecommunications interception equipment in the last few years, and rejected only 30 for human rights reasons. Further, the vast majority of countries do not provide any information on which exports they are approving or denying, opting to keep this information from the public.
For years, civil society organisations including PI have been trying to address these problems, calling on the EU among others to pass rules which make human rights a central consideration for export control authorities when assessing licenses, and which require them to publish data on their decisions so that they can be held to account.
This September, after years of negotiations, new rules are set to come into force which we hope will do just that, though EU countries may still try to carry on as usual if they do not implement them responsibly.
While those protections are vital, countries such as Israel have consistently approved all but a handful of applications, relegating the entire process to little more than a rubber stamp. Further, to subject a technology to licensing requirements, it is necessary to describe in technical language what is to be controlled: if this language is overly broad, it may have huge unintended consequences, such as undermining the ability of IT security researchers to share information among themselves.
International Human Rights Bodies
The UN and other international organisations play an important role in developing norms and maintaining international adherence to human rights laws.
Bodies such as the UN Human Rights Council can address human rights violations, initiate investigations, and make recommendations to governments around the world. Under the Universal Periodic Review mechanism, for example, anyone can submit evidence regarding a country's adherence to human rights standards, which the Council can use to make recommendations; it has done so numerous times on surveillance and privacy issues.
UN Special Rapporteurs also play an important role in protecting and promoting human rights law. Each responsible for a specific theme or country, such as the right to privacy or freedom of expression, they have made important contributions to the promotion and awareness of human rights issues and regularly act upon reports of human rights abuses. In 2019, the Special Rapporteur on freedom of opinion and expression initiated a review of the surveillance industry and its interference with human rights, ultimately calling for an immediate moratorium on the sale, transfer and use of surveillance technology until human rights-compliant regulatory frameworks are in place.
For states which take their human rights obligations seriously, these institutions act as a force to develop norms and guide behaviour, and they can serve as an important international venue to which activists can take evidence even when they would be easily censored or ignored domestically. Yet these bodies have limited powers to enforce any recommendations.
Prioritising Cybersecurity
While aggressive surveillance powers are often justified on the grounds that the first priority of government is to keep its people secure, such powers often actually undermine people's security.
Spyware, such as NSO Group’s for example, relies on the exploitation of vulnerabilities in IT devices and networks in order to access devices and exfiltrate data. But these same vulnerabilities can also be potentially exploited by anyone else who is aware of them, including criminals and foreign nation states. By opting to exploit these vulnerabilities rather than reporting them to device manufacturers such as Apple, NSO Group and governments who use such methods are therefore intentionally undermining IT security.
Other surveillance techniques pose the same problem. In 2020 the Bureau of Investigative Journalism (TBIJ) exposed how surveillance companies are accessing networks in the Channel Islands from which they are able to track mobile phones around the world, relying on IT security weaknesses within telecommunications infrastructure.
The ultimate effect of this is that governments who do not prioritise cybersecurity leave people vulnerable to surveillance, including by adversarial foreign governments or malicious third parties.
Instead, responsible governments should invest their resources in ensuring that IT infrastructure is as secure as possible, incentivise industry to maintain good cybersecurity, and punish those who do not.
Last week, for example, the German Federal Constitutional Court found that where the government identifies a vulnerability in an IT system, it has an obligation to contribute towards protecting the users of IT systems against third-party attacks on those systems.
Targeted at Industry
The Role of Big Tech
Big Tech companies which have made fortunes from people's data have a responsibility to ensure their customers are protected. Decisions taken in board rooms in places like California now have a monumental impact on billions of people's lives around the world. Decisions by messaging companies to offer end-to-end encryption so that even they cannot read the content of communications being sent, for example, hugely limit the type of surveillance technology which can be used to read people's messages.
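To see why end-to-end encryption limits what an intermediary can read, consider a deliberately toy-sized sketch of a Diffie-Hellman style key exchange. Everything here is illustrative: the prime is far too small and the XOR "cipher" is insecure, so this is not a real protocol, but it shows the shape of the idea: two users derive a shared key from values a relaying server never sees.

```python
import hashlib
import secrets

# Toy illustration of the end-to-end encryption idea (NOT a real protocol):
# the parameters are far too weak for real use, but only public values cross
# the network, so a server relaying messages never learns the shared key.
P = 2**127 - 1  # a Mersenne prime; real systems use much larger, vetted groups
G = 3

def keypair():
    """Generate a private exponent and the public value sent over the wire."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(priv, other_pub):
    """Each side derives the same 32-byte key using only its own secret."""
    return hashlib.sha256(str(pow(other_pub, priv, P)).encode()).digest()

def xor_cipher(key, data):
    """Toy keystream: XOR with SHA-256-derived bytes (illustration only)."""
    stream = hashlib.sha256(key + b"stream").digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# The messaging server relays only alice_pub, bob_pub, and the ciphertext.
k_alice = shared_key(alice_priv, bob_pub)
k_bob = shared_key(bob_priv, alice_pub)
assert k_alice == k_bob  # both ends independently derive the same key

ciphertext = xor_cipher(k_alice, b"meet at noon")
print(xor_cipher(k_bob, ciphertext))  # prints b'meet at noon'
```

Real end-to-end encrypted messengers build on this same asymmetry with vetted primitives and authentication; the point is that the company in the middle carries ciphertext without ever holding the key, which is precisely what frustrates interception tools aimed at message content.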
The biggest companies such as Apple and Microsoft also have access to vast resources which they can use to protect their customers against surveillance. Microsoft, for example, recently helped civil society and IT researchers better understand and develop technical protections against malware used by another Israeli malware company, Candiru, and issued software updates to protect customers against their exploits.
WhatsApp has gone a step further and directly challenged NSO Group in US courts arguing that it is in violation of laws designed to protect against hacking. Civil society groups, including PI, other tech companies, and surveillance experts have all supported WhatsApp by intervening before the court.
Standards Setting Bodies
Big decisions on cybersecurity are not only decided in board rooms and government departments; there are a number of non-governmental and intergovernmental standards bodies which make important design decisions. Bodies such as the Internet Engineering Task Force (IETF), the Internet Corporation for Assigned Names and Numbers (ICANN), the World Wide Web Consortium (W3C), the International Telecommunication Union (ITU) and the 3rd Generation Partnership Project (3GPP) all develop protocols which govern much of how modern IT infrastructure functions.
Which body decides what is itself a contested question: while bodies such as the IETF rely on 'multistakeholderism', roughly an approach that includes industry and civil society groups as well as governments, bodies such as the ITU are wholly comprised of governments, meaning that governments such as China's are able to use them to promote highly authoritarian internet standards.
Ensuring that these bodies prioritise cybersecurity and the open internet makes it harder for surveillance to take place, given that some types of surveillance rely on vulnerabilities in infrastructure protocols. For instance, 5G standards contain significant mitigations against the use of some surveillance tools.
Resisting Unlawful Surveillance & Transparency
Governments around the world are passing increasingly intrusive laws which seek to force big tech companies to allow them to carry out surveillance. On a simple level, such laws may, for example, require a company to provide user data it holds to a government authority, when presented with a warrant as part of a criminal investigation.
However, these laws may also require companies such as telecommunications operators to give authorities direct access to their networks, meaning the company itself has no insight into what data is being collected. While China and Russia are perhaps most infamous for this, such ‘direct access’ is common in several other jurisdictions around the world and presents a huge threat to people’s rights. Similarly, other powers can also be used to force companies to allow government surveillance: in the UK, for example, the Government can also force a company to secretly change its product to facilitate surveillance by using what are known as Technical Capability Notices.
Here, it is important for companies which collect or have access to huge amounts of their customers’ data to do everything they can to ensure it is only accessed lawfully and in line with human rights standards. At a simple level, this may involve for example carefully examining requests for data from government agencies and refusing to provide information if the request is profoundly unlawful.
But this can also mean standing up to governments which try to do things like get direct access to their networks: recently, for example, the Global Network Initiative (a multistakeholder body which includes some of the biggest tech companies on the planet) issued an important statement urging governments to refrain from such practices.
Many of these companies also make public vital information about surveillance practices, which may include the number of data requests they have received and complied with, how surveillance works in a country, and even whether a system of direct access is in use. Such efforts not only allow activists and journalists to better understand surveillance systems and government practices around the world, but also provide an important tool for holding governments to account.
Due Diligence Requirements
Companies that sell surveillance tech have a responsibility to protect human rights. Under the UN's "Protect, Respect and Remedy" framework, these companies are expected to ensure their operations avoid infringing on the human rights of others and to address adverse human rights impacts with which they are involved.
The Electronic Frontier Foundation, for instance, has a guide on ‘How Technology Companies Can Avoid Being "Repression’s Little Helper"’, which among other things encourages companies to refrain from participating in a transaction which enables human rights abuses.
Last September, the US Department of State issued guidance for surveillance companies on due diligence to avoid facilitating human rights abuses. Similarly, techUK and the UK government have produced guidance for companies aimed at allowing them to identify and manage human rights and national security risks associated with the export of cyber security products.
If a company is public, it is ultimately accountable to its shareholders. Convincing those shareholders to take human rights obligations seriously should therefore have a direct impact on the company they govern, or on companies looking for their investment. Similarly, private equity firms which own companies such as NSO Group have the power to decide on their operations. That's why civil society groups have urged NSO's owner, private equity firm Novalpina, to be more transparent about its activities and take concrete steps to prevent, mitigate, and remedy adverse human rights impacts.
The broad field of environmental, social and governance principles are playing an increasing role in investors’ and funds’ decision making: recently several lenders cited such concerns for not providing financing to NSO Group, while Microsoft announced it would not invest in any company that sells facial recognition technology.
However, it remains to be seen how genuine this commitment is. For example, a recent shareholder proposal calling on Thomson Reuters to review its contracts with government agencies involving the provision of investigative software received the support of only 17% of shareholders.
Employee Activism
Particularly at big tech companies in the US, employees are taking stands on a number of social issues, including surveillance.
Collective Action in Tech, which tracks collective action in the tech industry, cites some 413 such actions. In 2018, for example, nearly five thousand employees at Google successfully demanded that the company terminate its contract with the US Department of Defense for Project Maven, a programme aimed at improving targeting for drone strikes.
Similarly, Google and Microsoft both faced pressure from their own staff when they recently bid for the US Department of Defense's $10 billion contract to provide cloud hosting, with Google dropping out due to conflicts with its "AI Principles". Finally, the Tech Workers Coalition offers guides and tips to tech company employees committed to an inclusive and equitable tech industry.
Targeted at People
Digital Security Training
The last line of defence against surveillance tech is for people to be aware of what surveillance technology is available and how to keep themselves secure.
At Privacy International, for example, we recently published a series of guides on high-tech police surveillance capabilities at protests, including tips and strategies about how you can protect yourself from being identified, tracked and monitored.
Simple actions such as updating software, avoiding opening suspicious email attachments, or enabling security protections such as 2-factor authentication can also have a huge impact on your digital security and that of your loved ones.
As a result, digital security courses are common and can help people in specific circumstances improve their awareness. However, it is important to understand that threats are unique to each person, depending on where they live, who might want to spy on them, and what the would-be spy is after. So while it is true that governments can covertly turn on a phone's camera, this doesn't mean that everyone will be a target. Equally, just because someone uses an end-to-end encrypted messaging app doesn't mean they are safe from such hacking.
To avoid giving people a false sense of security or scaring them away from using tech altogether, such training needs to be tailored to the individual context. Here, the concept of threat modelling is important, as is the understanding that there is no such thing as perfect security. Resources such as Security Planner can help people better understand these threats.
For that reason, while it is important for individuals to take steps to increase their awareness and protect themselves, the responsibility cannot be put solely on them.
An international response
As technology continues to play a more central role in people's everyday lives, the threats posed by surveillance technologies and the companies which sell them will continue to grow. To mitigate these threats, people, governments, and industry all have a role to play.
What that role is exactly is highly dependent on the context. While some mechanisms may provide protections for some people in some countries, they may not be realistic or effective elsewhere. A law which limits how a government agency might use NSO Group’s tech might have an effect in one country, but it might equally be useless in another where authorities fail to respect fundamental rights and the rule of law.
The key is to allow people and activists with an understanding of their context to drive this change.