A demonstration of how a compromised security camera could leak sensitive information through Morse code-like infrared signals. (Video Credit: Cyber Security Labs at Ben-Gurion University)

Editor’s Note: Welcome to my weekly column, Virtual Case Notes, in which I interview industry experts for their take on the latest cybersecurity situation. Each week I will take a look at a new case from the evolving realm of digital crime and digital forensics. For previous editions, please type “Virtual Case Notes” into the search bar at the top of the site.

Recent research shows that keeping devices off the “internet of things” might not protect them from being hacked. Researchers from Ben-Gurion University of the Negev in Israel have demonstrated that security cameras with no connection to the internet can be manipulated to both send and receive messages using only infrared light, in a so-called “aIR-Jumper” air gap attack. The only catch: the attacker would first have to infect the camera’s computer network with specialized malware, which would require physical access, supply chain interference or social engineering of those with direct access to the system.

Most surveillance cameras have infrared LEDs that allow them to continue recording in the dark. Infrared light, though invisible to the naked human eye, illuminates the camera’s field of view and allows for “night vision” recording, even in complete darkness. Most of these LEDs brighten or dim automatically as the camera’s sensors detect changing light levels. This varying intensity is what researchers, led by BGU Cyber Security Research Center head Mordechai Guri, sought to manipulate in order to send coded messages from a camera to an attacker standing outside and recording the messages with their own camera.

I spoke with Chris Weber, co-founder and managing principal of Casaba Security, about these types of covert “air gap” attacks that rely on subtle changes in light, sound—even temperature—to send coded messages to and from an attacker and a compromised, non-internet-connected device.

“(Air gap attacks) require some sort of close physical proximity to the device, which is different from attacking something over the internet,” Weber said. “You actually have to be within a certain range of this device now, maybe 50 ft. or 100 yards, or whatever it is depending on what your mechanism is.”

This is different from an internet of things, or IoT, attack, in which attackers may gain remote access to an unsecured camera, router, toy or medical device—anything that connects to the web.

A demonstration of how an attacker could send covert, malicious communications to an infected camera by flashing infrared light signals in a Morse code-like fashion. (Video Credit: Cyber Security Labs at Ben-Gurion University)

Even when not tangled up in the internet of things, security cameras are still “things” that can communicate with other “things” and therefore provide a gateway for an attacker to access an otherwise disconnected system. Malware can cause a camera to methodically change the intensity of its infrared LEDs in a Morse code-like message to a nearby attacker—messages that could contain sensitive information like usernames and passwords—but the same malware could also take advantage of the camera’s ability to “listen” for different levels of light, receiving and processing potentially dangerous command and control messages.
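To make the exfiltration side of such a covert channel concrete, here is a minimal sketch, not the researchers’ actual implementation, of how malware might key a secret out through an IR LED: each bit of the message maps to a timed on/off state of the LED. The `set_ir_led` function is a hypothetical stand-in for whatever hardware interface an implant would abuse, and the timing is chosen to match the 20 bit/sec rate reported in the paper.

```python
import time

BIT_DURATION = 0.05  # seconds per bit, i.e. 20 bit/sec (the paper's reported exfiltration rate)

def to_bits(message: str) -> list:
    """Flatten an ASCII string into a list of bits, most significant bit first per byte."""
    return [(byte >> i) & 1 for byte in message.encode("ascii") for i in range(7, -1, -1)]

def set_ir_led(on: bool) -> None:
    """Hypothetical hardware hook: switch the camera's IR LED on or off.

    A real implant would write to the camera's LED controller here; this sketch
    deliberately does nothing.
    """
    pass

def exfiltrate(message: str) -> None:
    """Leak a secret by holding the IR LED on (1) or off (0) for each bit in turn."""
    for bit in to_bits(message):
        set_ir_led(bit == 1)
        time.sleep(BIT_DURATION)
    set_ir_led(False)  # leave the LED in its idle state when done
```

An attacker filming the camera would then recover the bits by measuring the LED's brightness in each 50-millisecond window of their own recording; the paper's actual encoding is more sophisticated, but the on/off-keying idea is the same.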

“People are starting to think of different ways to access devices, especially sensory devices that have some sort of sensing equipment in them,” Weber explained. In these cases, attackers use devices’ sensing capabilities against them, transmitting messages that are harmful instead of useful (e.g., command messages to unlock a door or open a gate, giving criminals access to burglarize a home or business). To make matters worse, these malicious communications are almost completely undetectable—not to mention unexpected.

The researchers’ paper explains that changes in the camera’s IR light intensity are nearly undetectable from the point of view of someone viewing the camera footage live, and that infrared light sent from an attacker to the camera’s sensors appears as a beam or flash of white light that might not cause suspicion, especially if it only appears briefly or from a far distance.

The paper says that cameras can be made to send out data in the form of coded, Morse code-esque IR light signals, which an attacker can video record, process and translate from tens of meters away at a rate of 20 bit/sec. In the other direction, coded IR messages can infiltrate a camera and its malware-infected system at a rate of 100 bit/sec from hundreds of meters, or even kilometers, away.
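To put those figures in perspective, a quick back-of-the-envelope calculation, assuming 8 bits per ASCII character and no protocol overhead, shows how fast small secrets move over such a channel:

```python
def transfer_time(num_chars: int, bits_per_sec: float) -> float:
    """Seconds needed to move num_chars ASCII characters at the given bit rate
    (8 bits per character, ignoring any framing or error-correction overhead)."""
    return num_chars * 8 / bits_per_sec

# Exfiltration at the paper's reported 20 bit/sec:
# a 12-character password takes 96 bits, i.e. under 5 seconds.
print(transfer_time(12, 20))

# Infiltration at the reported 100 bit/sec:
# a 64-character command string arrives in just over 5 seconds.
print(transfer_time(64, 100))
```

Leaking a password, in other words, takes only a few seconds of subtle LED flicker; it is bulk data, not credentials, that such low rates make impractical to steal.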

Defending against this kind of attack would require disabling a camera’s IR capabilities, rendering it useless at night, or placing the camera in an area where no one can come within meters of it—impractical for surveilling a public area, such as outside a business or the street in front of one’s house. This may sound discouraging, but Weber says this type of research is less about warning against attacks that are realistically happening right now, and more about exploring vulnerabilities that others may not have thought of, which can ultimately lead to smarter and more secure devices.

“It’s always helpful to show new vectors and get people thinking outside the box. Even though they’re so interesting, it doesn’t mean they’re always going to be exploited in the real world,” he said. “This is a matter of device manufacturers (…) really just preventing the malware from getting onto their device in the first place.”