What security means for NGOs (and why we do it badly)
There are three good reasons why security is so hard for NGOs. First, we are afraid to speak about meaningful security. Second, we focus on the wrong areas of security and in turn spend money on and prioritise the wrong things. Third, we struggle to separate the world we want from the worlds we build within our own organisations. At PI we have struggled with, and failed at, each of these for over 20 years. Out of exhaustion, we decided to do something about it: we are building an open framework, a project we call Thornsec.
When it comes to organisational security, NGOs, like individuals, are too often starting from a feeling of insecurity. We are too often told we are stupid or powerless.
'I can't believe you don't use...' [Tor, Signal, GPG, open source, ...]
'I can't believe you use...' [Microsoft, Macs, X-year old computers, Gmail, ...]
'If the intelligence agency wants in then there's nothing you can do.'
'If you're focused on keeping intelligence agencies out then you're focusing on the wrong problem.'
There's so much to worry about and so much snarkiness that we end up making people feel stupid or powerless. To be clear, nothing in the above statements is wrong. That's why it's even more frustrating.
We must be more supportive. We mustn’t denigrate. We need to take people on a meaningful process to understand threat, appreciate risk, do analysis, and make hard decisions.
Of course it's great that so many of the common debates about security focus on end-user devices. We've all been asked the question 'is an iPhone more or less secure than an Android phone?' For an NGO, the ability to ask that question is a luxury.
Rather, organisations may find it more helpful to ask themselves the following question: "How would we know if we have been subjected to surveillance or breach?"
Answering this question is essential, but we rarely invest in that, and instead look to buy the best device or choose the best application. Those purchasing questions won't on their own stop data breaches. At a technical level, we've seen how attacks against some companies have led to gigabytes of data being exfiltrated without the companies even noticing.
To address this risk, we've worked to segment services and access in order to identify aberrant behaviour. Why are two computers on a network communicating directly with one another? Why is your printer connecting to the outside world? Being able to prevent, and also be made aware of, such attempted abnormal behaviour on your network is necessary, and it would have been helpful in preventing the propagation of recent ransomware attacks.
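As a sketch of what this kind of segmentation can look like at the firewall level, here is an illustrative nftables fragment. The subnets and the default-deny policy are assumptions for the example, not Thornsec's actual configuration:

```
# Illustrative nftables sketch: default-deny forwarding between office
# subnets, so a printer "phoning home" is blocked and logged, not silent.
table inet office {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # Allow replies to connections the staff subnet initiated.
        ct state established,related accept

        # Staff (assumed 10.0.10.0/24) may send print jobs to the
        # printer subnet (assumed 10.0.20.0/24) over IPP, nothing more.
        ip saddr 10.0.10.0/24 ip daddr 10.0.20.0/24 tcp dport 631 accept

        # Everything else is aberrant: log it so you can see it, then drop.
        log prefix "blocked-egress: " drop
    }
}
```

The point of the final rule is the question above: a segmented network doesn't just block the printer's outbound connection, it tells you the attempt happened.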
We are trying to make this easier. This is why we are building our framework, Thornsec, which will allow us and any small organisation to segment services and access.
Harder solutions to easy problems
But the easiest problems are harder than this. Too often we look to encryption when we aren't doing the basic things: using good passwords, detecting phishing attacks, patching systems, and keeping backups. We need to be promoting password managers that help people use stronger and more diverse passwords. We need to be helping our colleagues identify phishing attacks, and changing our communications practices so that a phishing email would seem alien rather than a dangerously cunning adaptation of your ways of doing things. (Why would your head of finance ever receive internal emails asking for foreign transfers? Why do you share links over unsigned email?)
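One low-cost control that makes spoofed internal email stand out is publishing SPF and DMARC records for your domain. A minimal sketch, using a hypothetical example.org domain; the policy values are illustrative, and the right policy for your organisation depends on how your mail is actually sent:

```
; Illustrative DNS records hardening example.org against email spoofing.
; SPF: only this domain's own mail servers (MX hosts) may send as it.
example.org.         IN TXT "v=spf1 mx -all"

; DMARC: tell receivers to reject mail that fails SPF/DKIM alignment,
; and send aggregate reports to a mailbox you monitor.
_dmarc.example.org.  IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc@example.org"
```

With records like these in place, an email forged from your own domain (the classic "head of finance, please wire funds" lure) is far more likely to be rejected or flagged before a colleague ever sees it.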
Updates and backups are what we are particularly trying to address with Thornsec. There must be ways to rebuild an organisation's infrastructure with ease if it has been attacked or compromised, your offices have been flooded, a laptop has been lost, and so on. And there must be a way to regularly know what kinds of updates your services need, e.g. to applications and operating systems, and to implement them safely and consistently.
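The backup half of this can be sketched in a few lines. The example below is a minimal, hedged illustration of the principle (archive, then verify before trusting), not Thornsec's actual implementation; the paths and function name are assumptions:

```python
# Minimal sketch of a dated backup-and-verify step. The key habit it
# encodes: never report a backup as successful until you have re-opened
# the archive and confirmed it is readable.
import tarfile
import time
from pathlib import Path


def make_backup(src_dir: str, dest_dir: str) -> Path:
    """Archive src_dir into a timestamped tarball and verify it."""
    src = Path(src_dir)
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)

    stamp = time.strftime("%Y-%m-%d_%H%M%S")
    archive = dest / f"backup-{stamp}.tar.gz"

    # Create the compressed archive.
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)

    # Verify: re-open and list members; a truncated or corrupt
    # archive will fail here instead of during a disaster recovery.
    with tarfile.open(archive, "r:gz") as tar:
        members = tar.getnames()
    assert src.name in members, "backup archive failed verification"

    return archive
```

Run on a schedule and copied off-site, even a sketch this small answers the rebuild question above: you know the archive existed and was readable on the day it was made.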
It's impossible for a privacy advocate to give a talk without a question coming up: 'which tech and apps and services should I use?'
There's really no good answer for everyone without getting into religion. Equally, it's frustrating when we respond that the answer is complex. Inherently, we mistrust companies that confuse service provision with exploitation of our data. Our organisation does not want to be the product that they're selling.
If you leave that point aside for a moment... Sometimes a well-known global company's services are the best answer for some organisations; sometimes getting your data out of your country is the optimal solution, even to American or Russian firms; sometimes WhatsApp is the lesser of evils; and sometimes paying for Apple's attractive devices is proportionate to your needs. PI and PI staff do elements of all of the above.
Some industry leaders (and they have to be heavily scrutinised to ensure this is really the case) invest in security: systems secured by vast amounts of funding to keep other parties out (while often still empowering the provider, by design, to exploit your data), that use encryption in transit (at a minimum) and at rest (though they still generate metadata), that run bug bounties, and that frequently update systems and products in an auditable manner. We all need to increase pressure on them to ensure this is truly the case.
If the resources are available, my religious answer is this: use open source systems that are regularly updated, run them locally on your own systems, and keep the attack surface small. But my religious answer could be very dangerous to you. Running your own services badly could be more dangerous than outsourcing a well-designed service to an adversary. Our objective with Thornsec is to make this easier through virtualisation, separation and segmentation, and to ensure that if there is a breach or hack, organisations are able to fail well.
So yes, I am a fervent believer in a world where NGOs run our own services and keep processing under our control. As a privacy organisation, we pressure companies on security for the public but also because we want it for ourselves. We want to minimise attack surfaces locally and generally, and minimise metadata that is accessible to companies and in turn, others.
This is the world we at PI are working toward. The risks NGOs face are increasing dramatically, with threats arising from domestic and foreign powers. We know governments are going to demand access to data held by third parties. We want to see a world where companies can't be compelled to hack their customers' services or devices. We work hard to resist government policies that undermine the ability of industry to build more secure systems, but we also don't trust industry to act in our interests. So our proposal is to strengthen the path to a different world: one where NGOs have the ability to secure themselves reliably.