The future of Red Team operations

October 31, 2019 by Howard Poston

Introduction

The Red Team assessment is an increasingly popular method for an organization to get a realistic feel for their overall security. Organizations’ attack surfaces are large and constantly growing, so the ability to identify the vulnerabilities most likely to be targeted by an attacker can be invaluable.

In the future, Red Teaming is likely only to grow in popularity, but other changes are more difficult to predict. Even so, a few trends are likely to shape the future of Red Team assessments.



Driven by regulations

In recent years, the regulatory landscape has expanded dramatically. The threat of data breaches has driven governments to pass new regulations designed to protect the sensitive information of their constituents that has been entrusted to or collected by corporations.

The most famous of these new regulations is the EU’s General Data Protection Regulation (GDPR), which protects the data of EU citizens regardless of where the company collecting the data is located. However, this is not the only data protection regulation in existence. Many countries and US states have passed their own data privacy laws, and existing regulations and standards like PCI DSS, SOX and HIPAA remain in effect.

It seems likely that future Red Team assessments will be driven by the need to demonstrate compliance with applicable regulations and standards. Some regulations require regular testing, and all of them levy fines for failing to demonstrate the ability to adequately protect customer data. When the cost of non-compliance outweighs the price of comprehensive security testing, Red Team engagements, especially those with a compliance focus, will likely become an even more popular way of testing an organization’s security posture.

ML-enhanced engagements

The goal of a Red Team assessment is to accurately simulate how an organization would be attacked in order to identify vulnerabilities that are likely to be detected and exploited. This typically involves the Red Team following a procedure that mixes structure (to ensure that the assessment is comprehensive) with flexibility (to adapt to the customer’s unique environment).

However, many of the activities performed by the Red Team are repetitive and are based upon responding to information collected by the previous test or action. These types of activities are ideally suited to automation, where a testing tool knows the logical “next” step based upon the previous response.
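To make the idea concrete, here is a minimal sketch of what rule-driven "next step" selection might look like in an automated testing tool. The findings, action names and rules below are illustrative assumptions, not part of any real Red Team framework:

from typing import List

# Hypothetical mapping from a finding to the logical next test.
# These rules are invented for illustration only.
NEXT_ACTIONS = {
    "open_port:22": "attempt_ssh_credential_spray",
    "open_port:80": "run_web_content_discovery",
    "open_port:445": "enumerate_smb_shares",
}

def choose_next_action(finding: str) -> str:
    """Return the logical next step based on the previous result,
    or hand the decision back to a human analyst."""
    return NEXT_ACTIONS.get(finding, "flag_for_human_analyst")

def plan(findings: List[str]) -> List[str]:
    """Queue a follow-up action for every finding from the last scan."""
    return [choose_next_action(f) for f in findings]

# Example: a port scan reports SSH and SMB exposed plus an unknown service.
print(plan(["open_port:22", "open_port:445", "open_port:8443"]))
# ['attempt_ssh_credential_spray', 'enumerate_smb_shares', 'flag_for_human_analyst']

In a real engagement the rules would be far richer (and increasingly learned rather than hand-written), but the pattern of "observe result, select next action" is exactly the kind of repetitive logic that automation handles well.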

As machine learning and artificial intelligence mature, they will likely be used extensively in Red Team operations. An ML- or AI-based testing system can provide scale to the Red Team and focus human analysts on the attack vectors most likely to bear fruit.

On the defender’s side, AI- and ML-based systems can learn the features that indicate a possible attack. This machine-versus-machine dynamic can dramatically increase the speed, scope and effectiveness of a Red Team assessment and allow the customer to rapidly improve their security through short cycles of attack, response and retrospective.

Intelligence-driven assessments

Currently, the best Red Teams tailor the tools and tactics used in their assessment to the customer. The size of the customer’s attack surface and the sheer number of possible attackers and attack vectors mean that it is impossible for a Red Team to try to identify every possible vulnerability in an organization’s security landscape. However, by focusing on the vulnerabilities most likely to be exploited, the Red Team can make a measurable difference in the customer’s ability to resist an attack during an assessment.

In the future, Red Team assessments are likely to be much more focused on the potential attacks that an organization can expect to experience. A massive amount of data is available about known vulnerabilities, common attacks, hacking groups (and their tools, techniques and targets) and other features of the cybersecurity threat landscape. By aggregating and analyzing this data, organizations can identify the type of attack that they are most likely to experience and the probable target of that attack.
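As a simple illustration of what "aggregating and analyzing this data" could mean in practice, the sketch below ranks attack techniques by how often they appear in threat reports against a given sector. The records, field names and sectors are invented for demonstration; a real pipeline would draw on curated threat intelligence feeds:

from collections import Counter
from typing import List, Dict

# Invented sample of threat intelligence records (sector + observed technique).
threat_reports: List[Dict[str, str]] = [
    {"sector": "finance", "technique": "phishing"},
    {"sector": "finance", "technique": "credential_stuffing"},
    {"sector": "finance", "technique": "phishing"},
    {"sector": "retail", "technique": "pos_malware"},
]

def likely_attacks(sector: str, reports: List[Dict[str, str]]) -> list:
    """Rank techniques by how often they are reported against a sector."""
    counts = Counter(r["technique"] for r in reports if r["sector"] == sector)
    return counts.most_common()

# A finance-sector customer would see phishing ranked first, which could
# steer the Red Team toward phishing-led scenarios in the engagement plan.
print(likely_attacks("finance", threat_reports))

Even a coarse ranking like this helps the Red Team decide where to spend limited assessment time.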

Using this information, the Red Team can design assessments that provide maximum impact to the customer’s security. While some breadth of coverage is always a good idea in case the analysis is imperfect, a focus on the most likely attack vectors ensures that the vulnerabilities with the highest probability of exploitation are identified and remediated.

Human-focused assessments

Including social engineering attacks in a Red Team assessment can be a hard sell for some customers. If not managed properly, a social engineering assessment can backfire if employees feel that management is trying to trick them into getting caught acting insecurely. As a result, social engineering attacks are often left out of scope in Red Team engagements.

However, the reality of the current cybersecurity threat landscape is that humans are the target of most cyberattacks: over 99% of cyberattacks require some form of human interaction to succeed. As customers accept the importance of testing their staff for human vulnerabilities, Red Team assessments will become more human-focused.

Conclusion: Preparing for the future

Red Team assessments are designed to help an organization to understand and remediate the vulnerabilities in their attack surface in a realistic way. By acting like a real-world attacker, the Red Team identifies the vulnerabilities that an attacker is most likely to discover and exploit. The customer can then leverage the Red Team’s knowledge and experience to mitigate these vulnerabilities in a way that maximizes their resiliency to future attacks.

All aspects of a Red Team engagement are driven by the needs of the customer. As data protection regulations and standards become a more significant consideration for organizations, Red Team assessments will become even more focused on demonstrating the necessary level of compliance. As organizations’ attack surfaces grow and technology evolves, the face of the Red Team assessment will change with them. 

While the details of the future of Red Teaming may be uncertain, these assessments will almost certainly stick around and even grow in popularity as organizations try to prepare for and combat cyberthreats.


Sources

  1. Cyber security and the growing role of red teaming, ITProPortal
  2. The future of red teaming: Computer robots face off in adversarial rounds, CSO
  3. Red Team Supply Chain Attacks in Modern Software Development Environments, Praetorian
  4. More than 99 Percent of Cyberattacks Need Humans to Click, Security Magazine
Howard Poston

Howard Poston is a copywriter, author, and course developer with experience in cybersecurity and blockchain security, cryptography, and malware analysis. He has an MS in Cyber Operations, a decade of experience in cybersecurity, and over five years of experience as a freelance consultant providing training and content creation for cyber and blockchain security. He is also the creator of over a dozen cybersecurity courses, has authored two books, and has spoken at numerous cybersecurity conferences. He can be reached by email at howard@howardposton.com or via his website at https://www.howardposton.com.