
Air gaps: Happy gas for infosec or a noble but inert idea?

Spooks and boffins jump 'em, but real-world headwinds remain strong

By Darren Pauli, 11 Feb 2015

Feature Last year Michael Sikorski of FireEye was sent a very unusual piece of malware.

The custom code had jumped an air gap at a defence client and infected what should have been a highly-secure computer. Sikorski's colleagues from an unnamed company plucked the malware and sent it off to FireEye's FLARE team for analysis.

"This malware got its remote commands from removable devices," Sikorski said. "It actually searched for a specific formatted and hidden file that was encrypted, and would then decrypt it to access a series of commands that told it what to do next."

External network links are the lifeblood of most malware: they provide the means for malcode to be implanted on victim machines and serve as the command-and-control channel over which stolen data is shipped off to attackers and further infections are seeded. This sample did all of that without a network.

Sikorski's unnamed malware used employees to spread to other machines and distribute commands. Attackers hacked internet-connected computers they knew staff with access to the air-gapped machine would use, and turned any external storage device into a digital bridge.

Those bridged machines allowed Sikorski and his colleagues to retrieve the malware, allowing them to establish that it was part of a wider attack on air gapped machines.

Their analysis showed the malware could be told to conduct reconnaissance, seek out particular pieces of valuable information, list directories and execute new malware carried over on the staff thumb drives.

"Somebody would come by, plug in their stick, pull the drive out, and all the commands would have been run. The malware is still resident on the system so next time a drive is plugged in, it could receive more commands."
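The mechanics Sikorski describes can be sketched in a few lines. The file name, format marker and cipher of the real malware are not public, so in the hypothetical reconstruction below a hidden dotfile and a simple XOR stand in for whatever it actually used:

```python
import os
import tempfile

# Hypothetical stand-ins: the real malware's file name, marker and
# cipher are unknown, so these values are illustrative only.
CMD_FILE = ".sysupdate.dat"   # hidden command file on the stick
XOR_KEY = 0x5A
MAGIC = b"CMD1"               # format marker checked before parsing

def xor_crypt(blob: bytes) -> bytes:
    # XOR is symmetric: the same routine encrypts and decrypts
    return bytes(b ^ XOR_KEY for b in blob)

def read_commands(mount_point: str):
    """Scan a mounted drive for the hidden command file; return commands."""
    path = os.path.join(mount_point, CMD_FILE)
    if not os.path.exists(path):
        return []
    with open(path, "rb") as f:
        plain = xor_crypt(f.read())
    if not plain.startswith(MAGIC):
        return []                      # wrong format, ignore the file
    return plain[len(MAGIC):].decode().splitlines()

# The attacker's side: drop an encrypted command file onto a "stick"
mount = tempfile.mkdtemp()
with open(os.path.join(mount, CMD_FILE), "wb") as f:
    f.write(xor_crypt(MAGIC + b"list .\nexfil *.doc"))
print(read_commands(mount))   # ['list .', 'exfil *.doc']
```

Each time a drive is mounted the resident malware re-runs the scan, which is why the infection keeps receiving fresh orders with every plugged-in stick.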

Into the lab

Such attacks are intriguing because a few feet of air is often assumed to confer extra security: hackers need a network on which to operate, so air's non-conductive properties (for data, at least) are seen as the last word in security. It generates no shortage of intrigue, then, when that theory is disproved and an isolated computer is breached.

(l-r) FireEye FLARE engineers Matthew Graeber, Richard Wartell, and Michael Sikorski.

But as Sikorski's tale proves, air gaps can be beaten.

And around the world, researchers are proving it's possible, sometimes with outlandish means of bypassing physical security such as sucking data out of monitors and speakers.

The hack Sikorski and pals identified came out of Israel's Ben-Gurion University, where four hackers at its cyber security lab had honed an attack through which already-infected air-gapped computers could exfiltrate data to passing mobile phones via FM radio signals emitted by their video cards. The AirHopper technique, which in other forms has been around for decades and, according to leaked Edward Snowden documents, was popular with the NSA and other spy agencies, used off-the-shelf hardware to funnel information off infected systems at distances of up to seven metres between machine and attacker.

"This kind of attack scenario assumes the air-gapped computer is already infected by malware by means of a USB stick or malicious files copied to the computer," says Dudu Mimran, chief technology officer of the Israeli university's security labs. "Such infection can take place at any time beforehand and can be very fast, since it does not involve the actual data leakage, and as such can go unnoticed. Later on the malware can leak the data from the infected computer, either the current data being typed on the keyboard or existing documents from the computer."
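The signalling idea behind AirHopper can be sketched without the radio hardware: data bits map onto a pair of carrier frequencies (binary frequency-shift keying), which the malware induces as FM emissions from the video cable. The sketch below only computes the tone schedule, and its frequencies and timings are assumptions rather than AirHopper's actual parameters:

```python
# Illustrative FSK parameters - not AirHopper's real values
FREQ_0 = 87_900_000   # Hz carrier for a 0 bit (assumed)
FREQ_1 = 88_100_000   # Hz carrier for a 1 bit (assumed)
BIT_MS = 100          # milliseconds per bit (assumed)

def to_bits(data: bytes):
    """Yield the payload one bit at a time, most significant bit first."""
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def fsk_schedule(data: bytes):
    """Return the (frequency_hz, duration_ms) tone plan for the payload."""
    return [(FREQ_1 if bit else FREQ_0, BIT_MS) for bit in to_bits(data)]

sched = fsk_schedule(b"pw")
print(len(sched))   # 16 tones: 8 bits per byte, two bytes
print(sched[0])     # 'p' is 0x70, so the first bit is 0 -> (87900000, 100)
```

At 100 ms per bit the channel is painfully slow, which is exactly why such attacks target small, high-value items like passwords and keys rather than bulk data.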

Compartmentalising the attack and delaying data theft until after infection meant there was less chance the loss would be detected, Mimran says, and allows an attacker to set a trap to steal credentials from staff who subsequently use the machine. He assumes, though cannot prove, that the attack is being used in the wild to bridge air gaps.

Others have found means to suck data without first needing to infect an air-gapped machine. In December, Australian security governance boffin Ian Latter rocked the Kiwicon hacker confab with a demonstration of how data can be exfiltrated through monitor pixels, bypassing known detection methods. Attackers would need physical access to the machine to install a commercial HDMI recording device and an Arduino-based keyboard, leaving no traces of the attack for forensics to analyse, aside perhaps from closed-circuit television footage of the manipulation.

Latter, the author of that ThruGlassXfer attack, also likes an attack developed in 2012 using a do-it-yourself USB human interface device, in which data was hoovered off air-gapped systems using booby-trapped caps, num and scroll lock keys on a keyboard.

"I can't go past it for its sheer subtlety - is your organisation tracking the CAPS-lock status on every device?" Latter says, also giving a nod to Mimran's work. The attack was improved in 2013 to exfiltrate data at a faster rate of 10 Kbps and may have been used as early as 2008.
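The lock-key channel is easy to model: the compromised host toggles num lock and caps lock to signal two bits per tick, which a rogue USB HID device samples from the LED state updates it receives. The two-bits-per-tick framing below is an assumption for illustration, not the published attack's exact encoding:

```python
def encode(data: bytes):
    """Yield (num_lock, caps_lock) LED states, two payload bits per tick."""
    for byte in data:
        for shift in (6, 4, 2, 0):
            pair = (byte >> shift) & 0b11
            yield bool(pair & 0b10), bool(pair & 0b01)

def decode(ticks):
    """Reassemble bytes from the sampled LED states (the HID device's side)."""
    out, acc, n = bytearray(), 0, 0
    for num, caps in ticks:
        acc = (acc << 2) | (int(num) << 1) | int(caps)
        n += 1
        if n == 4:          # four ticks carry one full byte
            out.append(acc)
            acc, n = 0, 0
    return bytes(out)

print(decode(encode(b"key")))   # b'key' survives the LED round trip
```

Its subtlety comes from the fact that lock-state updates are a legitimate, broadcast part of the USB keyboard protocol, so nothing about the traffic looks abnormal.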

Other air gap attacks are also intriguing. Consider BadBIOS, the bizarre case of malware reportedly capable of spreading over the airwaves, healing itself, and persisting. It withstood a tidal wave of doubt because the discoverer of the attack, and also its victim, Dragos Ruiu, is a respected security researcher. The rootkit, detailed in late 2013, could reportedly hop air gaps, survive motherboard firmware rewrites and mess with a variety of operating systems.

Less than a month after Ruiu reported the malware messing with his personal computers, German geeks Michael Hanspach and Michael Goetz had concocted an attack in which malware could slowly spread between nearby computers using microphones and speakers. That attack adapted robust acoustic communication techniques for covert theft, using the near-ultrasonic frequency range.
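A minimal sketch in the spirit of that acoustic channel: each bit becomes a short tone near the top of the audible band, written out as a WAV file a speaker could play and a nearby microphone could pick up. The frequencies, bit rate and absence of any framing or error correction below are illustrative assumptions, not Hanspach and Goetz's actual modem design:

```python
import math
import os
import struct
import tempfile
import wave

RATE = 44_100                      # samples per second
FREQ_0, FREQ_1 = 18_500, 19_500    # Hz, just below most adults' hearing
BIT_SAMPLES = RATE // 20           # 50 ms per bit, i.e. 20 bits per second

def tone(freq_hz: int, n: int):
    """Generate n samples of a sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq_hz * t / RATE) for t in range(n)]

def modulate(bits):
    """Binary FSK: one near-ultrasonic tone per payload bit."""
    samples = []
    for bit in bits:
        samples += tone(FREQ_1 if bit else FREQ_0, BIT_SAMPLES)
    return samples

bits = [1, 0, 1, 1]
samples = modulate(bits)
path = os.path.join(tempfile.mkdtemp(), "covert.wav")
with wave.open(path, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)              # 16-bit PCM
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32000)) for s in samples))
print(len(samples))                # 8820 samples: four 50 ms tones
```

The published research layered a full mesh-networking stack on top of signalling like this, which is what let infections hop between machines rather than merely leak data one way.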

Rube Goldberg machines?

It is difficult to say how many of the publicly reported air gap attacks will work outside a lab. Latter's attack and Sikorski's malware certainly did, as did the NSA's: a year ago the agency was found to have built systems capable of stealing data from air-gapped machines that had a malicious USB device attached. That thumb stick shouted out to an NSA spy some 13 kilometres away over a "covert radio frequency".

Yet some demonstrated attacks will fall over in messy real-world scenarios where conditions are imperfect and dynamic. "While this type of research excites a lot of interest, the realities are often impractical and rely on too many variables to yield viable results," says Symantec's John-Paul Power who recently reviewed the potential real-world impact of air gap attacks and found many to be largely complex and a preserve of dedicated criminals or cashed-up nation-states.

"Put simply, the fact that an organisation has an air-gapped network in place suggests that it's likely to represent a high-value target. As such, any organisation with an air-gapped network will probably be aware of the fact that there are ways to breach these gaps, however unlikely some of them are," Power says. "The main targets will be small pieces of data, such as login credentials and encryption keys that will allow hackers to breach confidential information."

Get physical

The art of air gap defence seems to come down to smart physical security that keeps unauthorised staff away from highly sensitive areas, and banishes removable devices where these attack scenarios pose a sufficient risk.

Sikorski has clients who have poured glue into their USB slots, a measure he says is a reasonable, if irreversible, idea. "That's pretty good defence if your machine is unplugged from the internet and you can't plug things into it," he says. "If the slots are open then you can give the devices to the cleaning crew or whoever might plug these things in for you."

Physical security requirements become more difficult in remote-office scenarios, where systems such as closed-circuit TV and security guards may be far weaker than at headquarters. Controls like door security checks, human guards and war rooms are in place at high-security entities and go some way to keeping air-gapped systems safe. But that consistency can be hard to maintain when organisations have distributed offices that don't shell out for the same baseline of physical security.

"Effective physical controls will minimise attacks on air-gapped systems, and they don't need to be onerous," Latter says, adding that enterprises accordingly should move away from weak reactive legal, contractual and technical controls and reinstate physical security. "If I bleated out one DTMF tone per second for 20 minutes in the cube next to yours, wouldn't you tap me on the shoulder?"

Risk tolerance is critical, he says. Latter worked with his client organisations to articulate whether there was an amount of data they were prepared to lose before a proper security discussion could take place.

It isn't all physical, however. Organisations should implement security controls on air-gapped machines as if they were connected to the internet, a move Sikorski and Mimran say could help knock out some of the laboratory attacks.

In 1973 MIT professor Butler Lampson conceded that the cost of consistently closing off covert channels like those a well-resourced attacker may use could be so high that data leakage was inevitable. That seems more true today than ever, and unless the threats to separated machines are fully understood and acted upon, security will remain as thin as a few feet of air. ®

The image at the top of the page is a piece of art called "Digital Montage Number 2" - a floppy disk painting by Nick Gentry. It's licensed under Creative Commons 3.0.
