Editor’s Note: Welcome to my weekly column, Virtual Case Notes, in which I interview industry experts for their take on the latest cybersecurity situation. Each week I will take a look at a new case from the evolving realm of digital crime and digital forensics. I am happy to be returning to the column this week after a month's break working on other projects. For previous editions, please type “Virtual Case Notes” into the search bar at the top of the site.

Two-factor authentication (2FA) adds an extra layer of security, meant to protect a user's account even if their password and other credentials have been compromised. However, history has shown that one of the same issues that weaken single-factor authentication—users' reluctance to expend any extra effort, whether to change their password frequently or memorize a password stronger than "pass"—also keeps them from adopting two-factor authentication, which typically requires the user to take additional manual steps.

Security researchers may not be able to force people to change their attitudes and habits when it comes to their own cybersecurity, but they can try to help by reducing the effort the 2FA process requires, which could entice more users to embrace the option. One idea is to use a person's environment, rather than their actions, to determine whether the user logging in is in possession of the second-factor device. A proposed system called Sound-Proof, created by researchers from ETH Zurich, records ambient audio on both devices and then compares the recordings to determine whether the devices are in the same location. However, researchers from the University of Alabama at Birmingham found that determined attackers could still mimic the audio environment of the legitimate user in order to fool the 2FA system. For example, the attacker could call the victim, causing their phone to ring, then produce an identical ringing sound for the recording.
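Sound-Proof's core idea—accept the login if the two devices seem to be hearing the same thing—can be sketched in a few lines of Python. The function names, the plain Pearson correlation, and the 0.5 threshold below are illustrative simplifications of my own; the actual system compares recordings per frequency band.

```python
import math
import random

def similarity(x, y):
    """Pearson correlation between two equal-length recordings: a toy
    stand-in for Sound-Proof's per-band cross-correlation score."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return cov / var if var else 0.0

def colocated(browser_audio, phone_audio, threshold=0.5):
    """Declare the devices co-located if their ambient audio is correlated."""
    return similarity(browser_audio, phone_audio) >= threshold

# Simulated ambient sound: same room differs only by a little noise,
# a different room is statistically independent.
random.seed(0)
ambient = [random.gauss(0, 1) for _ in range(4000)]
same_room = [s + random.gauss(0, 0.1) for s in ambient]
other_room = [random.gauss(0, 1) for _ in range(4000)]
print(colocated(ambient, same_room))   # True
print(colocated(ambient, other_room))  # False
```

The sketch also makes the attack obvious: any sound the attacker can reproduce near the victim's phone (like a ringtone they triggered themselves) pushes the correlation up, because the scheme checks only that the environments match, not what is being played.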

With these challenges in mind, researchers from the University of Alabama at Birmingham's Security and Privacy In Emerging computing and networking Systems (SPIES) lab have developed a new proposed system—called Listening-Watch—that seeks to combine the effortlessness of using sound with the unpredictability of a randomly generated five-digit code, thwarting determined copycats while accommodating effort-averse users. The system uses a wearable device—such as a smartwatch—to record a spoken numerical code played by the browser of the primary device, which the browser simultaneously records as well. The audio and number must match between the wearable and browser in order for the log-in to be accepted.

“Our current implementation of Listening-Watch encodes [a] five digit numeric code to speech sound that is played by the browser. The five-digit code is randomly generated at the time of login by the web-server,” explained SPIES lab director Nitesh Saxena, associate professor of computer science at UAB and co-creator of the Listening-Watch concept. “If the system can extract at least four digits from both the browser and the watch recordings, the system accepts the login, otherwise, it rejects the login.”
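The accept/reject rule Saxena describes can be sketched in Python. The function names and the position-wise digit comparison below are my own illustrative assumptions, not the paper's implementation; the point is only the "at least four of five digits, from both recordings" decision.

```python
import secrets

def generate_login_code() -> str:
    """Server side: randomly generate a five-digit numeric code at login time."""
    return "".join(secrets.choice("0123456789") for _ in range(5))

def accept_login(code: str, browser_digits: str, watch_digits: str,
                 threshold: int = 4) -> bool:
    """Accept the login only if at least `threshold` digits of the code were
    recovered, in position, from BOTH the browser and watch recordings."""
    def recovered(extracted: str) -> int:
        return sum(1 for c, e in zip(code, extracted) if c == e)
    return (recovered(browser_digits) >= threshold and
            recovered(watch_digits) >= threshold)

# Example: speech recognition on the watch missed one digit ('_').
print(accept_login("41573", "41573", "41_73"))  # True: 4 of 5 digits recovered
print(accept_login("41573", "41573", "4__73"))  # False: only 3 digits from watch
```

Because the code is freshly generated by the web server at each login, a remote attacker cannot pre-record or predict the sound the way they could with ambient audio.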

The architecture of Listening-Watch. This figure shows an implementation of Listening-Watch using a smartwatch. The phone is not serving the role of the second factor—it is only used as a companion device. (Image: Courtesy of the University of Alabama at Birmingham, Security and Privacy In Emerging computing and networking Systems (SPIES) Lab)

A major key to the Listening-Watch system is the low-sensitivity but high-precision microphone found in many smartwatches and wearable devices: it cannot pick up sounds from several feet away, yet it is well-equipped to perform the speech recognition Listening-Watch needs. This low sensitivity closes off one possible way to bypass the 2FA—an attacker positioned near the victim's smartwatch, perhaps targeting a coworker sitting in the same office or a roommate in the same apartment.

The Listening-Watch system is also designed to prevent remote attackers—unlike with Sound-Proof, the audio the system relies on is random, unpredictable and not easily replicated. Without the same five-digit code being played near the legitimate user's smartwatch at the same time it is played by the browser, the system cannot authenticate the illegitimate user. And even if the audio could be played from just a few feet away, the researchers' tests found that, at an average speaker volume, only 1 percent of log-in attempts were accepted when the audio was played from over 50 cm (slightly less than 20 inches) away, while 100 percent were accepted when it was played from less than half a foot away. Closing that small a distance would hardly leave an attacker inconspicuous to the legitimate user—nor would blaring speakers at high volume to make up for the distance. That latter tactic could be further blocked by setting limits on the volume levels the system accepts, the researchers write.

“Since wearables always remain close to the user, they are relatively easy to reach compared to smartphones,” Saxena says, explaining the benefits of using wearables for 2FA, beyond their low-sensitivity microphones. “Unlike traditional phone-based 2FA, wearable-based 2FA supports logins from the phone, which is a very common use case scenario.”

Wearables like smartwatches may not be as common as phones; however, their growing prevalence makes them a viable option for an effective 2FA component, the authors argue. Their paper notes that research firm Gartner, Inc. predicted smartwatch sales would increase by 38.5 percent this year, and by 132.64 percent by 2021. Besides smartwatches, which may be on the more expensive side, the system could also work with less pricey wearables, such as fitness trackers or even wearables made specifically for 2FA, the researchers say.

Potential additions to the system could include a longer, more secure code (such as a six- or seven-digit code); randomly generated sounds other than a numeric code, which may be less distracting and more pleasant to listen to; and limits on the volume of the audio played, which could further prevent log-ins by attackers in close proximity to the user. One potential challenge to the system is an increase in the sensitivity of the microphones in common wearables, although the researchers note this is unlikely in the near future: wearables are designed to sit close to the user and mainly pick up voice commands, so there is no pressing need for them to record faraway sounds. The authors also point out that even without the microphone's low sensitivity, the system would still be more secure than something like Sound-Proof because of the unpredictability of the randomly generated audio.

Read the full paper here: