
The Airgap Myth

The airgap myth: an island is no effective protection. And once attackers are in, they often have free rein.

If it’s online, it can be hacked…

Airgaps are often claimed to be one of the few ways to shield a system completely from attackers. The underlying argument is simple – “our system has no connections to the outside, so no attacks can take place and we do not have to secure the system”.

Popular variations of this kind of thinking are “We are selling an insular system, we do not permit our customers to connect it to their networks” and “Our system is so proprietary, it cannot be attacked”.

The last two arguments in particular are often used by ICS (industrial control system) manufacturers or by manufacturers of niche products (e.g., TETRA systems, mobile phones, …).

… if it’s offline… it can still be hacked.

The reality, on the other hand, is somewhat different.

True, an airgapped system is not reachable from the Internet, so an attack via, for example, a remote code execution vulnerability is not possible. It is also not possible for an employee to receive infected e-mails (i.e., e-mails with malware in the attachment) on such a system.

But even airgapped systems interact in some way with users and the outside world. Completely self-sufficient systems have virtually no application in IT. In practice, information is generally collected, processed and then used for some purpose, which is usually defined and controlled by a user of the system.

It may sound misanthropic, but unfortunately it is a fact: every user is a potential attack vector.

If the operator relies solely on the airgap for protection, there is practically no limit to what malware can do once it has successfully reached the system.

Computer viruses can be airborne, too

The interaction with users can be used, and in fact is used, to manipulate and infect insular systems with malware. An excellent example of this is Stuxnet. Stuxnet infected any computer it came into contact with and thus, after some time, managed to reach the actual target system via an infected USB stick. The target was, you guessed it, a system that had never been connected to a larger network or the Internet.

Vector 1: USB Stick from Hell

But one does not have to dive into the world of the secret services to encounter this technique. Part of nearly every tiger team assessment is the classic trick of “losing” infected USB sticks in the car park or toilets of the client. (Recommended reading: article about the study of found USB sticks.) Curiosity is a deeply rooted property of the human mind. Here it becomes a trap: if such a USB stick is connected to the target system, the airgap has effectively been breached.

This is not only true for USB sticks: from a technical point of view, any MP3 player, any digital camera and any mobile phone connected via USB can become the carrier.
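To make the defensive side concrete: below is a minimal monitoring sketch that raises an alert whenever a USB mass-storage device appears on a host. It assumes a Linux system and the third-party pyudev library; the alert itself (here just a print) is a placeholder for whatever response an operator prefers.

```python
# Minimal sketch: alert when any USB mass-storage device appears on a
# supposedly airgapped Linux host. Assumes the third-party pyudev library
# (pip install pyudev); property names follow standard udev conventions.
import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem='block')  # only block devices (disks, partitions)

# poll() blocks until the next event, so this loops forever
for device in iter(monitor.poll, None):
    if device.action == 'add' and device.get('ID_BUS') == 'usb':
        # A real deployment would raise an alarm or disable the port;
        # here we simply print the event.
        print(f"USB storage attached: {device.device_node} "
              f"({device.get('ID_VENDOR', '?')} {device.get('ID_MODEL', '?')})")
```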

Vector 2: Poisonphone

It is unlikely that employees, no matter which company they work for, do not own a smartphone. There are more smartphones in Germany than people. According to estimates, over two billion smartphones are in use worldwide. And the number is increasing as you read this.

What is less apparent to many: a smartphone holds a whole collection of complex attack vectors. If a mobile device detects a WLAN, it tries to connect automatically. If the user permits this, he creates, possibly unintentionally, a network bridge to the Internet. This also applies if the mobile phone is connected to a computer: while employees proudly show their vacation photos on the large monitor, the bridge remains intact. The same applies, of course, to private laptops brought along.
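Such a bridge is at least detectable: a tethered phone usually shows up as a new network interface on the host it is plugged into. A minimal sketch, assuming a Unix-like system and using only Python’s standard library, could periodically compare the interface list against a baseline:

```python
# Minimal sketch: watch for new network interfaces on an airgapped host,
# e.g. the usb0/rndis interface a tethered smartphone typically creates.
# Uses only the standard library; socket.if_nameindex() works on Linux.
import socket
import time

baseline = {name for _, name in socket.if_nameindex()}

while True:
    current = {name for _, name in socket.if_nameindex()}
    for name in current - baseline:
        # A real deployment would alert an operator; we just print.
        print(f"ALERT: new network interface appeared: {name}")
    baseline = current
    time.sleep(5)  # polling interval is an arbitrary choice
```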

A smartphone without Internet access makes no sense, so it must be assumed that practically all of them come into contact with a frightening number of different cyber threats. Virus scanners are still very uncommon among smartphone users (in fact, it can be argued whether they are necessary at all), and the existing apps are designed to protect the phone or tablet itself, not to interrupt the infection chain of malware that targets entirely different systems.

Anyone who thinks an apple on the device protects against such risks per se is unfortunately wrong. The XcodeGhost incident (details can be found here) proves that any app store can (and will) be undermined.

Vector 3: Poisoned Update

Attacks do not always originate from dubious sources. Even self-contained systems require an update every now and then, and an insular system can also be attacked via this route: Can it be guaranteed that all the computers involved in the development of the update were fully secured over the entire development period? Were all the libraries used during programming scanned continuously and reliably? Were the IDEs (Integrated Development Environments) the developers worked with constantly monitored to ensure that no malicious code was, inadvertently, inserted into the update? It has happened before; we reported on it.

No software vendor I know of would ever accept legal liability for any one of these points.
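What an operator can do, however, is refuse to trust an update on provenance alone. Here is a minimal consumer-side sketch that compares an update file’s SHA-256 digest against a value obtained out of band (file paths and the digest source are assumptions for illustration). Note its limits: it only proves the file is the one the vendor released, not that the vendor’s build pipeline was clean before signing.

```python
# Minimal sketch: refuse to apply an update unless its SHA-256 digest
# matches a value delivered out of band (e.g. over a separate channel).
import hashlib
import hmac
import sys

def verify_update(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the expected one."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    # constant-time comparison avoids leaking how many leading bytes matched
    return hmac.compare_digest(digest.hexdigest(), expected_sha256.lower())

if __name__ == "__main__":
    path, expected = sys.argv[1], sys.argv[2]
    if not verify_update(path, expected):
        sys.exit("Update rejected: checksum mismatch")
    print("Checksum OK, proceed with installation")
```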

Unrealistic in practice

Exfiltration of data (if this is intended at all) is, of course, a further challenge. But there are solutions for this, too. And often perpetrators simply seek to do as much damage as they can, or to blackmail corporations with the damage they could do.

The fact that various manufacturers design their systems as island systems and record this in their terms of use is nice, but in most cases the practical implementation is unrealistic. Many of these systems have, in some way, a connection to a standard enterprise network, e.g. via remote HMIs, data collectors, or maintenance access for the manufacturer. Such a “conceptual airgap” is therefore unsuitable as a security feature.
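One practical consequence: a claimed airgap should be verified, not assumed. A minimal sketch, run from the supposedly isolated host itself, simply attempts outbound TCP connections to well-known public endpoints (the probe targets are illustrative assumptions); on a genuine airgap every attempt should fail.

```python
# Minimal sketch: verify that a "conceptual airgap" actually holds by
# attempting outbound TCP connections from the supposedly isolated host.
import socket

PROBES = [("8.8.8.8", 53), ("1.1.1.1", 443)]  # well-known public endpoints

for host, port in PROBES:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"AIRGAP BREACHED: outbound connection to {host}:{port} succeeded")
    except OSError:
        # connection refused or timed out: no route to the outside
        print(f"ok: no route to {host}:{port}")
```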

Proprietary systems are, in practice, much less proprietary nowadays than manufacturers would admit. They often run their own software, which may indeed use completely proprietary protocols, on standard Windows servers or on management clients that still use Windows XP.

If manufacturers supply their own hardware, it often consists of their own PCI(e) cards, which in turn are installed in standard servers. For the manufacturer's core product it may be true that no security gaps are known (owing to its proprietary nature), but this is rarely true for the infrastructure around the core product.

Conclusion: Airgaps can be useful, but only as a supplement

In some cases, it makes sense to keep highly critical systems as far away from the Internet as possible, as well as from the corporate network. Consciously planning in an airgap is the ultima ratio in doing so. However, this measure must not be regarded as a replacement for all other security measures, but merely as a supplement.

If you really want state-of-the-art protection, what you need is a comprehensive security strategy. This strategy must cover the entire IT landscape, not just a few core systems.

A strong combination of application security, endpoint protection, network security and continuous training ensures that a system designed as an island actually stays secure.

Only then will your island be a safe haven.
