Machine learning expert Hyrum Anderson, technical director for data science at Endgame, will present research today in a briefing at the Black Hat USA 2017 cybersecurity conference showing how an artificially intelligent malware agent, using machine learning methods, could evade an artificially intelligent malware detection system by playing a "game" in which it tests the system to uncover its blind spots. (Photo: Courtesy of Endgame)

Editor’s Note: Welcome to my weekly column, Virtual Case Notes, in which I interview industry experts for their take on the latest cybersecurity situation. Each week I will take a look at a new case from the evolving realm of digital crime and digital forensics. For previous editions, please type “Virtual Case Notes” into the search bar at the top of the site.

Machine learning can be a powerful tool for defensive cybersecurity. This adaptive form of AI, which learns from experience how to detect threats, can use its acquired knowledge to intercept even new and never-before-seen strains of malware. That capability is especially important at a time when malware is constantly evolving and threat actors are finding new ways to bypass existing security controls through a variety of exploits.

But the “good guys” defending their digital systems and the “bad guys” whose goal is to infect such systems are in a constant arms race for technological superiority. And as machine learning expert Hyrum Anderson, technical director for data science at Endgame, explains in new research presented today at the Black Hat USA 2017 cybersecurity conference, artificially intelligent defense systems could be vulnerable to an “evil twin” type threat—machine learning malware “factories.”

“What we’re developing and showing at Black Hat is—we can create a factory that learns how to churn out malware that sometimes can be very effective in defeating a particular defense,” Anderson said in an interview with Forensic Magazine. “I call it a factory but it’s an agent, it’s an artificially intelligent agent (…) You give him a piece of malware and he’s going to play a game against an anti-virus engine.”

The game goes like this: the machine-learning malware agent sends the defense system a piece of malware, and receives feedback as to whether that malware bypassed detection or not. The malware agent takes in this feedback and sends another piece of malware with different characteristics, a bit like changing into a different disguise to get past a bouncer, Anderson explained.

“You can think of the anti-virus agent as the bouncer and think about the artificially intelligent agent as trying to disguise the malware by dressing it in clothes,” he said. “So he tries one thing and the bouncer says ‘No, no, too young. Go away.’ He’ll learn that that didn’t work for this particular bouncer so he tries a different disguise.”

The malware agent will churn out several more pieces of malware with different “disguises” to send to the defense system, receiving feedback each time. After doing this a large number of times, the AI begins to get an idea of what’s getting through and what isn’t.

“After you’ve played this game, after he tries tens of thousands of times, he’ll begin to learn what the bouncer will let by—the subtle mistakes the bouncer’s making,” Anderson said.
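
To make that loop concrete, here is a minimal sketch of the trial-and-error game in Python. The mutation names, the scan() stand-in and its toy detection rule are hypothetical illustrations of the "disguise" idea, not Endgame's actual agent, which learns a policy from the feedback rather than guessing at random.

```python
import random

# Hypothetical "disguises" the agent can try. The names are illustrative only;
# real functionality-preserving file mutations are described in the white paper.
MUTATIONS = ["append_benign_bytes", "add_unused_imports", "rename_sections",
             "pack_payload", "append_overlay", "modify_timestamp"]

def apply_mutation(sample_bytes, mutation):
    """Placeholder: a real agent rewrites the binary so it still runs
    while its observable features change. Here we just tag the bytes."""
    return sample_bytes + mutation.encode()

def scan(sample_bytes):
    """Placeholder for the black-box 'bouncer' (the anti-malware engine).
    The toy rule below stands in for a real model's blind spot: it misses
    anything padded with benign-looking bytes."""
    return not sample_bytes.endswith(b"append_benign_bytes")  # True = flagged

def play_game(sample_bytes, rounds=10_000):
    """Trial-and-error loop: mutate, query the detector, and tally which
    disguises get waved through. A real agent learns from this feedback
    instead of sampling mutations uniformly at random."""
    evasions = {}
    for _ in range(rounds):
        mutation = random.choice(MUTATIONS)
        candidate = apply_mutation(sample_bytes, mutation)
        if not scan(candidate):  # the bouncer let it through
            evasions[mutation] = evasions.get(mutation, 0) + 1
    return evasions  # the detector's blind spots, ranked by how often they work

print(play_game(b"MZ... toy sample", rounds=1_000))
```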

These mistakes come from blind spots in AI models, places where threats can slip through the cracks. No machine learning model is perfect, and in fact, machine learning models can sometimes be easy to trick. Anderson pointed to a study by other researchers in which an image recognition AI was fooled into classifying a picture of a school bus as an ostrich after small, carefully chosen changes to the image's pixels nudged it toward what the model associates with an ostrich.

Similarly, by changing the characteristics of its malware—testing out different pieces of “clothing” for its “disguise”—AI agents used by hackers could trick defensive AI into seeing a dangerous virus as a benign communication.

In a black-box scenario, in which the attacker has no knowledge of the defense system's structure and can only receive success/failure feedback (or sometimes a “score” indicating how dangerous a sample was perceived to be), the game is pure trial and error, and the malware agent will only “win” about five times out of 100 tries, Anderson said. But that does not make it an insignificant threat.
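
When the engine returns a score rather than a hard verdict, that extra signal can guide the search more directly. The sketch below is a simple greedy hill-climb, offered only as an assumed illustration of score-based feedback; the research itself frames this as a reinforcement learning problem, and as the success rate above suggests, the search often fails.

```python
def hill_climb(sample_bytes, mutations, apply_mutation, score,
               threshold=0.5, max_rounds=100):
    """Greedy sketch of the score-feedback case: keep whichever single mutation
    most lowers the reported maliciousness score, and stop once the sample
    falls below the detection threshold. The helpers are passed in; toy
    versions appear in the earlier sketch."""
    current = sample_bytes
    current_score = score(current)
    for _ in range(max_rounds):
        if current_score < threshold:
            return current                      # evaded the detector
        candidates = [apply_mutation(current, m) for m in mutations]
        best = min(candidates, key=score)
        best_score = score(best)
        if best_score >= current_score:
            return None                         # no disguise helped; give up
        current, current_score = best, best_score
    return None                                 # ran out of tries, consistent with the low success rate
```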

“It only takes one attack vector for an attacker to be successful,” Anderson pointed out. “The defense has to be right all of the time and the offense only needs to be right once.”

So what can defenders do to make their AI models less vulnerable to artificially intelligent attackers? The answer, Anderson said, is to train the bouncer to recognize the most successful types of disguises.

“I can go tell my engineers, ‘You are especially susceptible to this kind of evasion. If you see sunglasses you’re more likely to let somebody in. That’s wrong—you need to fix that,’” Anderson explained. “Not only can we directly feed the samples to the machine learning model, but we can also summarize and feed that back to the human engineers to help them design better systems.”
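
The "feed the samples back" step Anderson describes amounts to retraining the detector on the disguises that fooled it. Below is a hedged sketch of that hardening loop; the feature vectors, the random-forest model and the harden() helper are stand-ins chosen for illustration, not Endgame's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def harden(model, X_train, y_train, evasive_vectors):
    """Add the evasive variants found by the attacking agent to the training
    data, labeled as malicious, and retrain so those blind spots close."""
    X_aug = np.vstack([X_train, evasive_vectors])
    y_aug = np.concatenate([y_train, np.ones(len(evasive_vectors))])  # 1 = malicious
    model.fit(X_aug, y_aug)
    return model

# Toy usage with random feature vectors, purely to show the shape of the step.
rng = np.random.default_rng(0)
X, y = rng.random((200, 16)), rng.integers(0, 2, 200)
evasive = rng.random((20, 16))   # features of samples that slipped past the old model
detector = harden(RandomForestClassifier(n_estimators=50), X, y, evasive)
```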

Anderson stressed that machine learning is still a powerful form of defense, and that his goal is not to spread doubt about its cybersecurity potential. Instead, he wants to make fellow cybersecurity professionals and others aware that there is room for improvement, and that machine learning is not a cure-all to be relied upon in the absence of other precautions.

“Machine learning is more robust than signature-based things that we’ve had in the past. However, there’s a danger in hanging your hat on it,” he said. “The point of the research is that machine learning is not a silver bullet for everything, and we can break it too.”

Along with his white paper “Evading Machine Learning Malware Detection,” co-authored by Bobby Filar and Phil Roth from Endgame and Anant Kharkar from the University of Virginia, Anderson will be publishing the OpenAI Gym environment used in conducting the research: a standardized virtual environment in which a machine-learning malware agent can be pitted against an anti-malware engine.
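
For readers unfamiliar with the format, an OpenAI Gym environment exposes a standard reset/step loop, which is roughly what interacting with the released environment would look like. The package and environment names below are assumptions based on the article, so check the published code for the exact identifiers.

```python
import gym
import gym_malware  # assumed package name; importing it registers the environments

# "malware-v0" is an assumed environment id; see the released code for the real one.
env = gym.make("malware-v0")

observation = env.reset()               # features of the current malware sample
done = False
while not done:
    action = env.action_space.sample()  # choose a file mutation (random baseline)
    observation, reward, done, info = env.step(action)  # reward reflects whether the engine was fooled
```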
