MAY 19, 2019

How AI Can Be Used for Malicious Purposes


In recent years, deep learning and machine learning have gained traction in many areas that directly improve our lives, as well as in complex tasks such as computer vision (image recognition), machine translation, and natural language processing. But like so many other technologies that are changing our lives for the better, AI also has destructive potential, and there is no reason it won’t be used for malicious activities as well. Up until now, we haven’t seen AI used for malicious activity in cybersecurity, mainly because of the high costs, the scarcity of the required skills, and the lack of readily available tools. But just like with any other technology, it is only a matter of time before that changes.

AI vs. AI

Think about what would happen when attackers start using the power of deep learning and machine learning to their advantage.

That said, the offensive use of AI is currently found mainly in academic research rather than in practical attacks.

Still, there’s a lot of talk in the industry about attackers using AI in their malicious efforts and defenders using AI as a defense technology, and we’re here to make sense of it.

There are three types of attacks in which an attacker can use AI:

  1. AI-boosted/based cyber-attacks – In this case, the malware operates AI algorithms as an integral part of its business logic. For example, it can use AI-based anomaly detection algorithms to indicate irregular user and system activity patterns. Unusual patterns can trigger different malware behavior, increased or decreased evasion and stealth configurations, and different communication times. Situational awareness has been implemented in malware for a long time, but AI can offer much more accurate and adaptive approaches.
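As a rough illustration of this idea, the sketch below shows how malicious code could use an off-the-shelf anomaly detection model (here, scikit-learn’s IsolationForest) to switch between a stealthy and an active mode. The feature collection and the behavior switches are hypothetical placeholders; this is not any real malware’s code, only a minimal sketch of the concept.

```python
# Minimal sketch: malware-side anomaly detection driving behavior changes.
# collect_activity_features() and the stealth/attack branches are hypothetical
# placeholders -- this only illustrates the idea, not a real implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

def collect_activity_features():
    # Hypothetical: returns e.g. [logged-in users, CPU load, process count, ...]
    return np.random.rand(4)

# Learn a baseline of "normal" host activity from observations gathered over time.
baseline = np.array([collect_activity_features() for _ in range(500)])
detector = IsolationForest(contamination=0.05).fit(baseline)

# At decision time: unusual activity -> stay quiet; normal activity -> act.
current = collect_activity_features().reshape(1, -1)
if detector.predict(current)[0] == -1:   # -1 means anomalous
    print("irregular activity detected: increase stealth, delay communication")
else:
    print("activity looks normal: proceed with payload")
```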

An interesting use case can be found in DeepLocker, presented by IBM Security researchers at Black Hat USA 2018. DeepLocker is encrypted ransomware that autonomously decides which computer to attack based on a face recognition algorithm: the attack takes place only when the camera recognizes the intended target.

There are other hypothetical use cases that might become part of malware business logic. Consider “Anti-VM”, for instance. Sophisticated malware tends to check whether it is running on a virtual machine (VM), to avoid performing its malicious activities in a sandbox, which would reveal that the file is malicious, or to avoid being analyzed by a security researcher, which might reveal how it works. In order to assist their Anti-VM efforts, malware writers could train a VM environment classifier that takes environment details (e.g., registry keys, loaded drivers, etc.) as features and determines whether the host the malware is running on is a VM. Moreover, such a model could resolve some of the difficulties malware faces when it runs on cloud hosts, which are also VMs but not security-research-oriented ones, increasing the malware’s spread.
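A minimal sketch of such a hypothetical Anti-VM classifier is shown below. The feature set, the tiny labeled dataset, and the feature values are invented purely for illustration; the point is only that a standard classifier can be trained on environment details.

```python
# Sketch of the hypothetical "Anti-VM" classifier described above: a model that
# looks at environment details and guesses whether the host is an analysis VM.
# The features and training data here are invented for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical labeled dataset: one row per host, label 1 = analysis VM, 0 = real host.
data = pd.DataFrame({
    "vbox_registry_keys": [12, 0, 9, 1, 0],   # VirtualBox-related registry keys found
    "vm_drivers_loaded":  [3, 0, 2, 0, 0],    # e.g. vmmouse.sys, vboxguest.sys
    "cpu_cores":          [1, 8, 2, 4, 16],
    "uptime_minutes":     [7, 4320, 15, 980, 10080],
    "is_analysis_vm":     [1, 0, 1, 0, 0],
})

X, y = data.drop(columns=["is_analysis_vm"]), data["is_analysis_vm"]
model = RandomForestClassifier(n_estimators=100).fit(X, y)

# On a victim host, the malware would extract the same features and query the model.
new_host = pd.DataFrame([{"vbox_registry_keys": 0, "vm_drivers_loaded": 0,
                          "cpu_cores": 8, "uptime_minutes": 2000}])
print("analysis VM" if model.predict(new_host)[0] else "likely a real host")
```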


  2. AI-based attack infrastructure and frameworks – In this case, the malicious code and malware running on the victim’s machine do not include AI algorithms; however, AI is used elsewhere in the attacker’s environment and infrastructure – on the server side, in the malware creation process, etc.

For instance, info-stealer malware might upload large amounts of personal information to the C&C server, which then runs an NLP algorithm to cluster and classify parts of that information as “interesting” (credit card numbers, passwords, confidential documents, etc.).
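The sketch below illustrates what such server-side triage could look like, combining a simple pattern-based pass with unsupervised text clustering. The example documents and the regular expression are invented for illustration; a real info-stealer backend would obviously differ.

```python
# Server-side sketch: triaging exfiltrated text so the operator only reviews
# "interesting" items. The documents are invented -- this only shows the idea.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

stolen_docs = [
    "meeting notes: lunch schedule and parking arrangements",
    "my visa card 4111 1111 1111 1111 exp 04/27 cvv 123",
    "vpn password reset: new credential is Spring2019!",
    "draft of confidential acquisition agreement - do not distribute",
]

# Cheap pattern-based pass: flag likely credit-card numbers right away.
card_like = [d for d in stolen_docs if re.search(r"\b(?:\d[ -]?){13,16}\b", d)]

# Unsupervised pass: cluster documents so similar items land together
# (credentials in one cluster, routine office chatter in another, etc.).
vectors = TfidfVectorizer().fit_transform(stolen_docs)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)

print("flagged as payment data:", card_like)
for doc, cluster in zip(stolen_docs, labels):
    print(f"cluster {cluster}: {doc[:50]}")
```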

Another example of this would be the #TheFappening attack, in which celebrity photos stored on iCloud were leaked. An attack like this could have taken place on a much larger scale had it been AI-facilitated. For instance, machine-based computer vision algorithms could review millions of pictures, identify which of them contain celebrities, and then expose only the matching ones, similar to those leaked in #TheFappening.
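As a rough sketch of how such filtering scales, the snippet below uses the open-source face_recognition package to check whether any face in a photo matches a reference face. The file names and reference photo are placeholders; the point is only that matching can be run automatically over millions of images.

```python
# Sketch: filter a large photo dump for pictures containing a specific person.
# File paths are placeholders; face_recognition is an open-source library.
import face_recognition

# Encode the target's face from one reference photo.
reference = face_recognition.load_image_file("reference_celebrity.jpg")
target_encoding = face_recognition.face_encodings(reference)[0]

def contains_target(path):
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        # True if any face in the picture is close enough to the reference face.
        if face_recognition.compare_faces([target_encoding], encoding, tolerance=0.6)[0]:
            return True
    return False

dump = ["photo_000001.jpg", "photo_000002.jpg"]  # in reality, millions of files
matches = [p for p in dump if contains_target(p)]
print(f"{len(matches)} of {len(dump)} photos contain the target")
```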

Another example of an AI-facilitated cyber-attack is a spear-phishing attack, as described in the report The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. In phishing, the target is “fooled” by a superficially trustworthy façade that tempts them to expose sensitive information or money, whereas a spear-phishing attack involves collecting and using information specifically relevant to the target, making the façade look even more trustworthy and relevant. The most advanced spear-phishing attacks require a significant amount of skilled labor, as the attacker must identify suitably high-value targets, research these targets’ social and professional networks, and then generate messages that are plausible within this context. Using AI – and specifically generative NLP models – this can be done autonomously and at a much larger scale.
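The sketch below illustrates the scaling argument: a generic, publicly available language model (GPT-2 is used here only as an example) is prompted with per-target details so that each message looks individually written. The target profiles are invented, and a real campaign would need far more context than this toy prompt.

```python
# Sketch of AI-assisted spear-phishing at scale: a generic generative model is
# seeded with per-target details so each message looks hand-written.
# Target profiles are invented; GPT-2 is used purely as an illustrative model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

targets = [
    {"name": "Dana", "employer": "Acme Corp", "topic": "the cloud migration project"},
    {"name": "Omar", "employer": "Initech",   "topic": "last week's security audit"},
]

for t in targets:
    prompt = (f"Hi {t['name']}, following up on {t['topic']} at {t['employer']}, "
              f"please review the attached document:")
    message = generator(prompt, max_new_tokens=40, num_return_sequences=1)[0]["generated_text"]
    print(message, "\n---")
```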


  3. Adversarial attacks – In this case, “malicious” AI algorithms are used to subvert the functionality of “benign” AI algorithms. This is done with the same algorithms and techniques used in traditional machine learning, but this time to “break” or “reverse-engineer” the algorithms of security products. For instance, Stochastic Gradient Descent, a technique used to train deep learning models, can also be used by adversaries to generate samples that are misclassified by machine learning or deep learning algorithms.

One example of adversarial learning is placing a sticker in a strategic position on a stop sign, causing a street-sign image classifier to misclassify it as a speed limit sign. Another example is injecting malicious data streams into benign traffic in order to cause an anomaly-detection-based network intrusion detection system (NIDS) to block legitimate traffic, effectively causing a Distributed Denial of Service (DDoS) attack.
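One well-known gradient-based way to craft such inputs is the Fast Gradient Sign Method (FGSM): nudge each pixel slightly in the direction that increases the classifier’s loss. The sketch below applies it to an arbitrary pretrained image model; the input image and target label are placeholders, and this is only a minimal illustration of the technique, not an attack on any specific security product.

```python
# Minimal FGSM sketch against a pretrained ImageNet classifier.
# The random "image" and the class index are placeholders for illustration.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo
true_label = torch.tensor([919])                         # e.g. an ImageNet street-sign class

# Compute the loss gradient w.r.t. the input pixels and step against the model.
loss = F.cross_entropy(model(image), true_label)
loss.backward()
adversarial = (image + 0.03 * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```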

Such attack techniques have been developed by researchers against computer vision algorithms, NLP models, and malware classifiers.

Brace Yourself!

We believe the AI vs. AI trend will continue to grow and cross over from academic POCs to actual full-scale attacks as computing power (GPUs) and deep learning algorithms become more and more available to the public.

In order to mount the best defense, you need to know how attackers operate. Machine learning and deep learning experts need to be familiar with these techniques in order to build systems that are robust against them. Keep up with the potential AI threat landscape, and watch this on-demand webinar on the evolution of modern cyber threats.