SEPTEMBER 15, 2019

The Advanced Threat Potential of Deep Learning

The objectives of Artificial Intelligence are to enhance the ability of machines to process copious amounts of data and to automate a broad range of tasks. The earliest forms of AI appeared in the 1950s, when techniques were developed that enabled computers to mimic human behavior. This gradually evolved into the machine learning of the 1980s, where AI techniques gave computers the ability to learn without being explicitly programmed to do so. More recent years have seen the rise of a new subset of machine learning, called ‘Deep Learning’ or artificial neural networks, in which computers process and assimilate new data in a manner similar to the human brain.

There are numerous real-world applications of Deep Learning. The more commonly known areas are autonomous self-driving cars, image recognition as used by Facebook and Amazon, and Natural Language Processing as used by Google and Apple. Yet, for the most part, the field of Deep Learning is still in its infancy: the full range of tasks that AI will transform is still to be realized, and so are the potential threats.

Here we explore the expanded threat potential ushered in by Deep Learning technology, and how it has widened the scope of threats beyond previous computational capabilities. “With great power comes great responsibility,” and with that in mind, we finally consider the moral implications.


Amplified Capability of AI

The original motivation for developing AI technology was to create systems that can perform a given task faster and more accurately than a human, and this objective has been met: human-level performance is no longer the highest standard to which a task can be performed. For example, the steady progress of AI has made it better at playing chess than even top-performing players. On May 11, 1997, Deep Blue, an IBM computer, beat the reigning world chess champion, Garry Kasparov, after a six-game match. The game’s combination of simple rules and multiple challenging problems made chess perfect for such an experiment.

The amplified efficiency of AI means that once a system is trained and deployed, malicious AI can attack a far greater number of devices and networks, more quickly and cheaply, than a malevolent human actor. Given sufficient computing power, an AI system can perform the same task across many more instances. This scalability is demonstrated by Google’s and Facebook’s facial recognition algorithms: if a Google image-classification model trained on ImageNet can classify a million images in a matter of minutes, a human being, no matter how fast, is no competition.
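
To make that scale concrete, here is a back-of-envelope comparison. The throughput figures below are illustrative assumptions, not measured benchmarks of any particular model:

```python
# Rough throughput comparison (illustrative, assumed rates).
N_IMAGES = 1_000_000
MACHINE_RATE = 5_000  # images/second for a GPU-served classifier (assumption)
HUMAN_RATE = 1        # images/second for a fast human labeller (assumption)

machine_minutes = N_IMAGES / MACHINE_RATE / 60
human_days = N_IMAGES / HUMAN_RATE / (3600 * 24)

print(f"Machine: {machine_minutes:.1f} minutes")    # ~3.3 minutes
print(f"Human:   {human_days:.1f} days, non-stop")  # ~11.6 days
```

Even under these conservative assumptions, the machine finishes in minutes what would take a human, working around the clock, nearly two weeks.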


The Democratization of AI

Access to software and new scientific findings is relatively easy. The AI industry has a culture of openness, where papers are published together with source code, and AI developments are often reproduced in a matter of weeks, if not days. Thus, in the cat-and-mouse race between the good guys and the bad guys, much of the time the two sides advance the threat landscape in step, because information is shared so freely. For example, both have been known to publish newly discovered malware in the same communal forums. However, it is not just the knowledge diffusion happening in AI that is potentially expanding cyber threats; advances in robotics and the declining cost of hardware are also making the technology more accessible to malicious actors.

As AI knowledge and systems become more prolific, the expectation is that this will expand not only the pool of actors equipped to carry out an attack, but also the frequency of attacks and the range of possible targets. The combination of knowledge diffusion and scalability means that more malicious actors can enter the field; once they have cleared the now-lower threshold of knowledge and resources, they can scale up the severity and frequency of their attacks. This makes their effort far more worthwhile, both in terms of a cost-benefit analysis and in the prioritization of numerous targets, whether the motive is financial gain or sheer devastation.


IBM’s DeepLocker

A demonstration of this expanded threat potential was given by IBM at the 2018 Black Hat conference, where they showed the potential capability and impact of deep learning-based ransomware, DeepLocker. The attack comprised a face-recognition algorithm that autonomously decided which computer to attack with its encrypted ransomware payload. This highly targeted and evasive attack tool hides inside other applications until it identifies a target victim. Once the unfortunate target is identified through various indicators, the AI algorithm ‘unlocks’ the malware and launches the attack. The identifying criteria can be any number of attributes, including facial features, audio, location, or system-level features.

To test the attack, IBM hid the ransomware in a video-conferencing application, where they knew it was not going to be detected, and trained the AI model to unlock it based on facial recognition. When the deep neural network recognized the facial features of the target sitting in front of their PC, via the webcam, the target’s face became the key that opened the payload and locked down their operating system.

The revelation of DeepLocker reverberated strongly across the cybersecurity industry for a couple of reasons. Firstly, it demonstrated the expanded range of triggers in which an attack can be buried, making the ransomware concealed within the deep neural network extremely difficult to reverse engineer. While there are indeed multiple attributes in which to embed the malicious business logic, an analyst examining the code can at best establish that a trigger was used, not what condition actually activates it. The difference appears subtle, but it is significant: it is the difference, for example, between knowing how to use electricity and knowing how it works.
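
To see why this matters to an analyst, compare a conventional, hard-coded trigger with a learned one. The hostname, function names, and model object below are hypothetical, purely for illustration:

```python
import socket

def detonate():
    ...  # payload execution (stub for illustration)

# Conventional malware: the trigger condition sits in the code in plaintext.
# Anyone reverse engineering the binary immediately learns who is targeted
# and can build detection signatures around the condition.
def conventional_trigger():
    if socket.gethostname() == "FINANCE-PC-07":  # hypothetical target name
        detonate()

# DNN-concealed trigger: the "condition" is a decision boundary spread across
# millions of learned weights. The analyst sees only a model and a threshold;
# the target's actual attributes are not written anywhere in the code.
def concealed_trigger(model, webcam_frame):
    if model.predict(webcam_frame) > 0.99:  # who satisfies this? unknowable
        detonate()
```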

The second, and ultimately more pressing, concern was the method, whereby the AI model is part of the malware’s business logic. The real breakthrough was that the attack was triggered by AI, rather than by a human being. Marc Stoecklin, Principal Research Staff Member and Manager at IBM, explained it this way: "the deep convolutional network of the AI model is able to convert the concealed trigger condition itself into a ‘password' or ‘key' that is required to unlock the attack payload." In other words, the AI algorithm was able not only to identify a potential target but also to trigger the attack against it.
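
Below is a minimal, purely conceptual sketch of the idea Stoecklin describes, not IBM’s actual implementation: the embedding produced by a recognition model is quantized and hashed into a symmetric key, so the payload decrypts only when the intended target is observed. The toy “model”, the feature values, and the XOR cipher are all stand-in assumptions:

```python
import hashlib
from itertools import cycle

def embed(frame):
    """Stand-in for a face-recognition network: frame -> quantized embedding."""
    # A real attack would run a deep convolutional network here; rounding a
    # toy feature vector just ensures the same face yields the same key.
    return [round(x, 1) for x in frame]

def derive_key(embedding):
    """Hash the quantized embedding into a 32-byte symmetric key."""
    return hashlib.sha256(repr(embedding).encode()).digest()

def xor(data, key):
    """Toy cipher for the demo (a real payload would use proper encryption)."""
    return bytes(c ^ k for c, k in zip(data, cycle(key)))

def try_unlock(frame, ciphertext, key_digest):
    key = derive_key(embed(frame))
    # Only a hash of the correct key ships with the binary: an analyst can
    # verify a guess, but cannot read the trigger condition out of the code.
    if hashlib.sha256(key).digest() != key_digest:
        return None  # wrong person in frame: payload stays opaque ciphertext
    return xor(ciphertext, key)

# Demo: seal a harmless payload to one specific (toy) set of facial features.
target_frame = [0.12, 0.87, 0.33]   # hypothetical webcam-derived features
key = derive_key(embed(target_frame))
ciphertext = xor(b"demo payload", key)
key_digest = hashlib.sha256(key).digest()

assert try_unlock([0.90, 0.10, 0.50], ciphertext, key_digest) is None  # stranger
assert try_unlock(target_frame, ciphertext, key_digest) == b"demo payload"
```

The design choice worth noticing is that the binary never stores the trigger condition itself, only a hash of the correct key: a reverse engineer can confirm that a given input unlocks the payload, but cannot enumerate which input does.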


Enlarged Psychological Distance

Cyberattacks have always been characterized by psychological distance and anonymity. The malicious actor never comes face to face with their targets, nor sees the impact of what they have unleashed. Sitting in an enclosed room, in front of a computer throughout the entire attack, the attacker often loses any sense of the full devastation caused.

AI can facilitate an even greater degree of psychological distance from the people who are impacted. In the case of an autonomous offensive weapon, such as the Terminator of science fiction, a perpetrator can avoid being present at the scene of the crime and avoid having to see their traumatized victims. The attacker could release numerous such machines on the public, in numerous locations around the world, each with a different task and each with its own violent capabilities.

Autonomous AI-driven offensive weapons are found well beyond the scope of science fiction. AI-based autonomous weaponry is currently being developed in Russia, the USA, the UK, China, and Israel, among other places. Because an AI-driven weapon needs no ongoing control or supervision, an attacker could launch as many attacks as they want with just a handful of people, be more selective in their targets, and be more devastating in their impact. Under these circumstances it would be possible for, say, 10 people to launch 50 million weapons, causing mass destruction on the same level as a large nuclear explosion.


The Moral Implication

The knowledge of how to design and implement AI systems can be applied to both civilian and military purposes, and likewise towards beneficial and harmful ends. In the same way that human intelligence can be put to positive, benign, or detrimental purposes, so can artificial intelligence. Many tasks that lend themselves to automation have this dual-use aspect. Consider tools such as Metasploit, which probe software for vulnerabilities: the knowledge they produce can be used either to fix a vulnerability or to exploit it. Likewise, a drone used to deliver packages is not all that different from a drone used to deliver explosives.

How we, as a global community, choose to expand the AI frontier will be critical. Even when knowledge is pursued for wholesome purposes, there is no guarantee that its end application will remain wholesome. Many have called for regulatory controls to help rein in AI’s expanding development, including Elon Musk, founder of Tesla, who said: “we need to be careful with AI. Potentially more dangerous than nukes… I’m increasingly inclined to think there should be some regulatory oversight (of AI), maybe at the national and international level”.

However, others feel that attempts to regulate ‘AI’ in general would be misguided. After all, how do you regulate something that has no single clear definition, and whose risks and considerations differ vastly depending on the domain?

With AI possessing good and bad potential in equal measure, there is ultimately no way to control its global exploration and application. Its threats have to be treated as a real danger, with the same pragmatism with which you protect your PC with antivirus software. The critical factor is having a defense system in place that is more technologically advanced than the impending threat: enterprises most susceptible to an advanced AI attack must have the technology in place to mitigate the developing risks of AI.


To learn more about Deep Learning, how it works and how it differs from machine learning, download the eBook Deep Learning for Dummies