This post was written by PI Technologist Ed Geraghty.
At the very heart of ThornSec’s design is the assumption that our security will fail. There is nothing perfect on this earth (except kittens). The entire point is to fail well. For charities and NGOs, which are fragile, poorly resourced, and often at risk, this is relatively novel thinking. We prepare for failure with strict adherence to good security practice.
It is exactly this novelty that makes us more open than your average Open Source Software (OSS) project: not only is the code itself open, but the configuration files produced for a given machine are written so that they can be easily inspected and, if so wished, run manually, line by line, without any reliance on the ThornSec Graphical User Interface (GUI).
At the heart of the security model is the following religious tenet: there should be no place to hide, persist, pivot, or walk across a network configured by ThornSec. Pwning one machine shouldn’t give unfettered access to other machines on the network.
The first fundamental decision was containerisation versus virtualisation. The current trend in tech is containerisation, and there are several reasons why this is sometimes a great idea: containers are far lighter on resources than VMs, and often a lot faster. Virtualisation requires not only a full copy of an operating system per Virtual Machine (VM), but also that all of the hardware be virtualised. Containers, on the other hand, run on bare metal, using filesystem magic (union/overlay filesystems plus kernel namespaces) to present a separate sandbox to each container running on the host.
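The filesystem “magic” in question is typically an overlay (union) mount. Here is a minimal sketch, assuming a Linux host and root privileges; the paths are purely illustrative and not anything ThornSec itself does:

```shell
# An overlay mount layers a writable "upper" directory over a read-only
# "lower" one (the image), presenting both as a single merged filesystem --
# the core trick behind container images.
mkdir -p /tmp/demo/lower /tmp/demo/upper /tmp/demo/work /tmp/demo/merged
echo "from the image" > /tmp/demo/lower/base.txt

mount -t overlay overlay \
      -o lowerdir=/tmp/demo/lower,upperdir=/tmp/demo/upper,workdir=/tmp/demo/work \
      /tmp/demo/merged

echo "container-local" > /tmp/demo/merged/new.txt  # written to upper/ only
ls /tmp/demo/merged                                # both files appear merged
umount /tmp/demo/merged
```

Crucially, all containers sharing a host also share that host’s single kernel, which is where the trouble starts.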
There’s a major problem with containers, however. Containers do not contain.
Let’s say we were running a series of Docker containers (a pretty common set-up these days). Now let’s say there’s a serious vulnerability which allows arbitrary escalation to root in one of those containers. What does that mean for the other containers on the same machine? Well, they’re pwnt too: unless user namespace remapping is in use, root in a container is root on the host. The attacker is now god over everything running on that machine.
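You can see this directly on a stock Docker install. The session below is a sketch, assuming Docker is running with its default configuration (no user namespace remapping):

```shell
# Inside the container, we are root...
docker run --rm alpine id -u        # prints: 0

# ...and from the host's point of view, that containerised process is
# *also* running as root -- same kernel, same uid 0.
docker run -d --rm --name sleeper alpine sleep 60
ps -o user= -p "$(docker inspect -f '{{.State.Pid}}' sleeper)"   # prints: root
docker stop sleeper
```

So a root escalation inside the container is a root escalation, full stop.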
Now let’s compare that to root escalation in exactly the same software running in a VM. Well… the VM is pwnt, but short of a VM escape (which is rare), there is no contagion to anything outside that VM. Not only that, but even if you did manage to escape to the HyperVisor, the likelihood is you’d only be able to execute commands as the VM’s user, and unless you’ve done something really wrong (like running everything as root…) this user will be unprivileged.
We understand that this puts a bit more pressure on organisations running ThornSec than if we’d gone with containers — however, the overhead is (in reality) very low, and is more than reasonable when we take into account the security it affords.
Running VMs also allows us to have control over what each VM can see, and allows us to have different, atomic sets of permissions and users per-VM in a way that doesn’t allow contagion up to our HyperVisor. Another thing it allows us to do is to only present to the VM exactly what we want it to be able to see from its HyperVisor (although the permissions system is the topic for another blogpost!).
Over the past 18 months, we have tried various ways of exposing storage “outside” of our VMs, and we eventually settled on the following: one VDI (“storage disk”) which holds the root filesystem, and another VDI (“data disk”) which holds any data that makes our VM unique, mounted at /media/metaldata with nobody:nogroup ownership and 0000 permissions. This separation between data and Operating System (OS) allows us to move towards fully ephemeral VMs, which can be torn down and rebuilt at will whilst retaining their actual data. Our thinking is that in the future only the data disk will be persistent, with the storage disk (OS and packages) living entirely in a RAM disk and rebuilt on a daily basis.
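As a sketch of what that two-disk layout looks like in VirtualBox’s CLI (the VM name, controller name, sizes, and device paths here are illustrative assumptions, not ThornSec’s actual output):

```shell
# One VDI for the (eventually ephemeral) root filesystem,
# one VDI for the data that makes this VM unique.
VBoxManage createmedium disk --filename ~/VMs/web/storage.vdi --size 8192
VBoxManage createmedium disk --filename ~/VMs/web/data.vdi    --size 20480

VBoxManage storageattach web --storagectl SATA --port 0 --type hdd \
           --medium ~/VMs/web/storage.vdi
VBoxManage storageattach web --storagectl SATA --port 1 --type hdd \
           --medium ~/VMs/web/data.vdi

# Inside the guest, /etc/fstab then mounts the data disk at its
# well-known path, locked down as described above:
#   /dev/sdb1  /media/metaldata  ext4  defaults  0  2
#   chown nobody:nogroup /media/metaldata && chmod 0000 /media/metaldata
```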
By mounting the data in this way from the HyperVisor, we are able to do iterative backups, and our (future) daily destruction of VMs will allow us to be confident we aren’t storing “Schrödinger’s backups”. By using a HyperVisor rather than a container host, we can make the HyperVisor itself completely transparent to the network: since it’s never accessed for anything, it may as well not exist. By storing each VM’s data outside of the VM itself, we are able to do backups in a way that cannot be tainted by persistence.
The other major design point that led us down this path is that charities and NGOs are, by their very nature, fragile: certainly far more fragile than their commercial counterparts. We need to design ThornSec so that it can be run by non-experts on a commodity laptop. By using VirtualBox over, say, KVM, Xen, or the plethora of other HyperVisors out there, we become host-platform agnostic: it is irrelevant whether we wish to run this on a Mac, on Windows, or on a flavour of *nix/BSD. The VMs and their settings can be swiftly moved between machines without the need for any Command Line Interface (CLI) commands, providing business continuity far beyond what most platforms provide.
We welcome any comments and feedback on the design decisions we’ve taken — this is an open approach to making our organisations safer in an increasingly dangerous world.