Artificial Intelligence Risk: Get Ready for AI-Powered Malware

IBM’s DeepLocker PoC gives the industry a look at artificial intelligence risk, and at what an attack produced with the help of deep neural networks will look like.

… According to Chris Gonsalves, director of research at The 2112 Group, IBM’s DeepLocker project is worth talking about, but it doesn’t have much practical impact right now. A cybercriminal building something like DeepLocker would have to put considerable effort into wrapping conventional malware in stealth and into profiling a single victim, whose well-known attributes are needed to inform a neural network.

“That’s nation-state level tradecraft,” Gonsalves told ITPro Today. “If I ran a nuclear enrichment facility in Iran or North Korea, I’d be pretty breathless and sweaty about this. For every other CISO out there battling conventional attacks with tight budgets and short staffs, this is going to be pretty far down the priority list.”

In addition, “far from silently targeting simple individuals with complex weaponry and obfuscation, today’s criminals are mostly about hitting as many victims as possible and separating them from their Bitcoin as quickly as possible before their C2 [command-and-control server] gets shut down,” he said.

The type of attack seen with DeepLocker is just one of the ways AI can be leveraged by threat actors, particularly against targets such as CFOs or other privileged users, where a lot of information, such as user names, biometrics, device profiles and system configurations, is known. That information can be used to train the DNN. Such malware must be embedded in an innocuous app, and Gonsalves said such attacks will likely show up in the supply chain, where the code base of a trusted application or service is compromised.
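
That concealment model is easier to grasp with a toy example. The Python sketch below is a minimal illustration of the idea, not IBM’s actual implementation: a stand-in matcher (where a real attack would use a trained network) turns observed host attributes into key material, so the encrypted payload can be unlocked only on a machine that looks like the intended target. The attribute names, the `embed` helper and the use of the third-party `cryptography` package are all illustrative assumptions.

```python
import base64
import hashlib

from cryptography.fernet import Fernet  # third-party: pip install cryptography


def embed(attributes: dict) -> bytes:
    """Stand-in for the concealment DNN (hypothetical).

    A real attack would use a trained network whose output is stable only
    for the intended target; here a canonical hash plays that role.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).digest()


def derive_key(embedding: bytes) -> bytes:
    # Fernet expects a 32-byte key, url-safe base64 encoded.
    return base64.urlsafe_b64encode(hashlib.sha256(embedding).digest())


# Attacker side: lock the payload to the target's expected attributes.
target = {"user": "cfo-jdoe", "domain": "corp.example", "cam_id": "0x1a2b"}
locked = Fernet(derive_key(embed(target))).encrypt(b"<payload bytes>")

# Victim side: decryption succeeds only when the observed environment
# reproduces the same embedding; on any other host it raises InvalidToken.
observed = {"user": "cfo-jdoe", "domain": "corp.example", "cam_id": "0x1a2b"}
print(Fernet(derive_key(embed(observed))).decrypt(locked))
```

The asymmetry is the point: the unlock condition never appears in the malware itself, so sandboxes and signature scanners have nothing to match against until the payload detonates on the right machine.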

“Much different than DeepLocker, the other kind of AI security issue arises from the malicious feeding of bad inputs into neural nets in order to get AI systems to misinterpret and mishandle them, something known as adversarial machine learning,” the analyst said. “Think about bombarding the neural network at the heart of a self-driving car with a bunch of inputs that indicate that a red light is actually green or that a highway exit sign is a stop sign. It’s not something that’s top of mind for IT folks right now, but as the number of predictive algorithms used in everything from retail to healthcare grows, this flavor of AI threat bears watching, as well.”
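
The canonical demonstration of that second threat is the fast gradient sign method (FGSM) from the adversarial machine learning literature. The PyTorch sketch below is a generic illustration under assumed toy conditions (an untrained stand-in model, a random input image, an arbitrary perturbation budget), not anything from the article: it nudges each input pixel in the direction that most increases the model’s loss, which is how a visually unchanged image can flip a classifier’s answer.

```python
import torch
import torch.nn.functional as F

# Toy stand-in classifier; a real attack would target e.g. a sign recognizer.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in camera frame
true_label = torch.tensor([3])                    # the correct class

# FGSM: take the gradient of the loss w.r.t. the *input*, keep only its sign.
loss = F.cross_entropy(model(x), true_label)
loss.backward()

eps = 0.1  # perturbation budget; small enough to be near-invisible in practice
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# Against a trained model, the adversarial prediction typically flips.
print("clean:", model(x).argmax(1).item(),
      "adversarial:", model(x_adv).argmax(1).item())
```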

Another area that bears watching as AI-powered malware becomes more prevalent is the deception technology space, where the honeypot technique has evolved into sophisticated defense platforms. In this arena, vendors like Cymmetria, Attivo Networks and Illusive will be key in figuring out how to detect the new kinds of behavior demonstrated by DeepLocker, Gonsalves added.

The work IBM has done around DeepLocker is an example of researchers imagining what upcoming attacks will look like, an important step in improving defenses, Gonsalves said. However, he cautioned that such future-looking analysis shouldn’t stop the work organizations are doing to protect systems against current threats. For example, application whitelisting, including creating comprehensive asset inventories and gleaning basic insights into application behavior, could help stop threats such as DeepLocker.
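
As a concrete illustration of the whitelisting idea, here is a minimal Python sketch of a hash-based allowlist check built from an asset inventory. The inventory format, the sample digest and the `may_execute` helper are hypothetical; real deployments would use platform controls such as Windows AppLocker rather than a script.

```python
import hashlib
from pathlib import Path

# Hypothetical inventory mapping SHA-256 digests to approved applications.
# (The sample digest is the well-known hash of empty input, for demo only.)
ALLOWLIST = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855":
        "empty-file-demo",
}


def sha256_of(path: Path) -> str:
    """Stream the file so large binaries don't have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()


def may_execute(path: Path) -> bool:
    """Allow a binary to run only if its hash is in the approved inventory."""
    allowed = sha256_of(path) in ALLOWLIST
    print(f"{path}: {'allowed' if allowed else 'blocked'}")
    return allowed
```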

“Unless and until a CISO has implemented things like judicious network segmentation, privileged account management, configuration and change controls, and data classification, then any conversation about the future of AI-powered cyberweapons is pretty much a distraction and a fool’s errand,” he said. …

Written by Jeffrey Burt

> Read the full article at itprotoday.com.