Tips on entering blue teaming, red teaming or purple teaming

Snehal Antani joins us from Horizon3.ai to talk about pentesting, red teaming and why not every vulnerability necessarily needs to be patched. He also shares some great advice for people entering the field.

– Get your FREE cybersecurity training resources: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

  • 0:00 - Intro
  • 2:12 - Origin story
  • 4:12 - Using your hacking powers for good
  • 7:14 - Working up the IBM ranks
  • 12:18 - Cloud problems
  • 14:25 - Post-IBM days
  • 16:50 - Work with the DOD
  • 20:33 - Why did you begin Horizon3.ai?
  • 24:38 - Vulnerabilities: not always exploitable
  • 29:46 - Strategies to deal with vulnerabilities
  • 33:36 - Sensible use of a security team
  • 35:29 - Advice for red and blue team collaboration
  • 39:14 - Pentesting and red teaming career tips
  • 41:12 - Demystifying red and blue team
  • 45:40 - How do you become intensely into your work
  • 47:24 - First steps to get on your career path
  • 49:49 - How to learn more about Horizon3.ai
  • 50:42 - Outro

[00:00:54] Chris Sienko: Welcome to this week's episode of the Cyber Work with Infosec podcast. Each week we talk with a different industry thought leader about cybersecurity trends, the way those trends affect the work of infosec professionals, and offer tips for breaking in or moving up the ladder in the cybersecurity industry. Snehal Antani is an entrepreneur, a technologist and an investor. He is the CEO and cofounder of Horizon3.ai, a cybersecurity company using AI to deliver red teaming and penetration testing as a service. He also serves as a highly qualified expert for the US Department of Defense, driving digital transformation and data initiatives in support of special operations. Prior to his current roles, Snehal was CTO and SVP at Splunk, held multiple CIO roles at GE Capital, and started his career as a software engineer at IBM. He has a master's in computer science from Rensselaer Polytechnic Institute, a BS in computer science from Purdue, and holds 16 patents.

So I heard about Snehal from one of his representatives who said that he wanted to talk about pentesting, and specifically why vulnerabilities don't necessarily mean exploitable vulnerabilities, which I think is a good distinction to make, one that I've not heard before. So I'm looking forward to getting into it. Snehal, welcome to Cyber Work.

[00:02:09] Snehal Antani: Excellent. Thank you for the invite. I appreciate it.

[00:02:11] CS: So we'd like to start by getting the story of our guest's cybersecurity journey in their own words. And looking at your bio, it's pretty clear that tech and computers have been in your blood since at least the time you were in college, where you got your BS and then your MS in computer science. So what was the original appeal of computers and tech, and where and when did the emphasis specifically on security take hold?

[00:02:31] SA: Yeah, it's a great question. And actually my intro to technology was way before college. My dad, who's an electrical engineer, taught me how to hack hardware and solder together flashlights and other kinds of home electronics. There are all sorts of burn stains in the carpet of the house I grew up in from me getting excited when I got a piece of tech to work, dropping the soldering iron and running to tell him what was up. And that was at six years old. When I was nine years old, I had a 386, a DOS machine. And my dad gave me a book on how to write code in BASIC. And my first program was: let A equal 1, let B equal 2, let C equal A plus B. Print C. And it was just an addiction at that point.

What's funny on the hacking side is that from that time, this is dial-up modems, bulletin board services, I stumbled upon The Anarchist Cookbook. And once you find The Anarchist Cookbook, it puts you down a path of finding hacker tools. How do you run phishing attacks in AOL chat rooms? How do you crack passwords? How do you do all that kind of stuff? So I very much got exposed to these things at a very early age. And thankfully, I stayed on the right side of the law, but with access to those tools it was very easy to quickly drift across the line and do things that you're not supposed to be doing.

[00:03:52] CS: Yeah, I love – I've had a few people who I've asked about that. One of my previous guests, Alyssa Knight, said that she was escorted off of her high school grounds by the Secret Service because she got a little too inquisitive with the school's mainframe. Or actually, I think it was a local military organization's mainframe. But can you talk about the thought process? When did you realize that there was a choice to be made in terms of using these powers for good or bad? And was there a specific thing where you saw someone go to jail? Or did you just naturally say, "Oh, I don't want to do that"?

[00:04:27] SA: Yeah, it's a good question. I can't remember what terrified me more. I think my dad had printed out a news article on a hacker kid, I forget who it was, getting busted and getting thrown in jail. And then he cut out the part where that same kid got recruited by the NSA. That was a detail I learned later when I reread the story. So there's a little bit of that. There's a little bit about kind of knowing what was right and wrong. So for instance, learning how to phish. There was an interesting tool I remember in AOL where in a couple of clicks you could phish an entire chat room. And in that chat room, I didn't know what I was doing. I hit go and suddenly it's phishing everybody to say, "Hey, we're AOL administrators. Your password has an issue. Please put your login and password here." And then a bunch of people said, "We're going to report you," and a bunch of people were like, "Oh, here's my password." Maybe a 50-50 split. And when they started saying we're going to report you, because I didn't know what I was doing, that's what really scared me. I'm like, "Holy crap. This is actually a big deal."

[00:05:27] CS: Yeah, people can see you now. Yeah.

[00:05:30] SA: Yeah. And once again, I was 10 years old at the time when I was learning this stuff. And so that kind of kept me on the right side of the line.

[00:05:37] CS: Basically a new toy that you just turned on, and it just sort of ran rampant over the neighborhood until people started looking at you.

[00:05:43] SA: Yeah. And then I remember like finding another forum where this tool that I downloaded was available. And I started asking questions like, “How do I use this?” And then I got punted off of AOL by one of the other like hackers. And then my machine would not stop blue screening. So that's when I realized that this is actually quite powerful.

[00:06:01] CS: Yeah, playing with some dark hearts there that you didn't want to replicate.

[00:06:07] SA: That's exactly it. But it's funny. I still pull out – I draw on those experiences even today. When you talk about how, even at 10 years old, I could in three clicks run a phishing campaign against a chat room of 5000 people. It is no different than the mechanisms. I mean, the tools are different, but the techniques are no different than what you see today in the industry.

[00:06:27] CS: Right. Yeah. Especially, as you say, with the sort of phishing as a service. And the kits that they're getting are probably not dissimilar from the tool that you picked up back then.

[00:06:37] SA: Yeah. In fact, when I was a CIO, we used to do internal phishing campaigns to assess our posture. The most successful phishing campaign we ran was sending out an email saying, "We detected you accessed adult material from your work laptop. Please click here to explain why." We had like a 90% click rate, with everything ranging from, "I'm usually very careful," to "I don't understand how this happened," to, "I'm offended. There's no way this is possible, unless my son –"

[00:07:06] CS: But everybody just had to weigh-in on it at least. Wow.

[00:07:12] SA: That's exactly it.

[00:07:13] CS: Amazing. So from 2000 through 2012, you worked your way up the ranks at IBM, starting as an intern for the WebSphere Application Server for z/OS, moving through the ranks and accruing experience up to senior product manager and business strategy lead for WebSphere cloud and virtualization technologies, all the way up to emerging technologies lead architect, strategist and project manager. So that's an enviable tenure with one company in an industry where a lot of people don't spend more than a few years in any one place. Could you talk about some of the skills you learned and some of the high-pressure projects or assignments that helped you learn and grow in your toolset? Do you have any memories of a time where you did a certain piece of work and thought, "I really just leveled up my skills right now"?

[00:07:54] SA: Oh, absolutely. In fact, those were incredibly important years for me from a professional standpoint. So when you go through college, you take a bunch of classes, you're with your friends, and you learn a bunch of skills, but you don't really know how to apply them. At least I didn't really know how to apply them. So I show up as a wide-eyed intern in Poughkeepsie, New York in 2000. And this is right around the dot-com crash. So all the jobs in Silicon Valley were gone. And at the time, this is pre-Google, pre-Facebook. I mean, the hot companies to work for were Sun Microsystems, Oracle, IBM – some of the more sought-after companies to go work for. And so I was able to land a gig at IBM and ended up in Poughkeepsie, New York on the mainframe.

And at first glance, it's like, “Why on earth would you work on the mainframe?” And what I realized early in that internship is the 40 years of engineering that went into that platform really would be a great learning place for me to understand how computing works, how virtualization works. Like all these emerging concepts. And there's this quote later I heard, which was the difference between those that say cloud and those that say mainframe is the year they were born. And a lot of the problems we ended up solving on the mainframe side were quite relevant to many of the problems that we see running cloud at scale.

So with that backdrop, I show up and I don't really know what I'm doing. And I'm writing tools. And I remember destroying the production environment multiple times, because I was unchecked as a wide-eyed recent grad and so on. But my focus early in my career was technical credibility. So how do I build that technical depth and breadth? And I didn't care about salary. I didn't care about title. I cared about working around people and for people that inspired me, that I could learn from. And there was – We called them the Fab Five. They're these five mainframe architects that were legendary in their areas, and I got to work underneath each of them. And so just keeping my mouth shut and absorbing as much as I could was really the early part of my IBM days. And I'll pause there and tell you this funny story about how that became a catalyst, but any follow-up thoughts or questions before I go on to an even cooler story?

[00:10:05] CS: Please. No, by all means.

[00:10:07] SA: Yeah. So here I am, I'm 25 years old, three years into the company. By far, a nobody. Pretty low level. And Steve Mills, who was the senior vice president of software at the time, sends out this email saying, "Hey, we've got this customer in Germany whose mainframe is constantly crashing. And nobody can figure it out. Does anybody have the skills to do it?" And this email got blasted out, worked its way down, and it came into my inbox. And I look at it and I'm like, "Hey, I think I know this area. I think I could do it." I skipped seven layers of management and replied directly back to Steve and said, "Steve – blah-blah, Snehal Antani, work on the mainframe in Poughkeepsie, on the dev side. I think I can solve this problem," essentially saying, "Put me in the game, coach." All of my layers of management were suddenly terrified, because who is this young punk kid that's going to volunteer? And Mills says, "Go for it." And the next day, I'm on a flight to Germany, reading every book. And once again, I kind of know enough, but I don't really know what I'm doing. And I show up in Münster, Germany. I'm sitting in front of the customer and they're explaining the problem. And I have no idea what they're talking about.

[00:11:18] CS: Oh no.

[00:11:18] SA: No idea what they're talking about. And I'm like, "Crap, I am going to get fired." And in the meantime, on that 10-hour flight, all I did was read every book, every paper, everything I could find just to keep up. And then they said something in their description of the problem that sounded familiar. And then they said something else that sounded familiar. And suddenly, I remember this moment, where everything in the world clicked. Everything made sense to me. Every aspect of enterprise software, multi-threaded server code, blah, blah, blah, everything made sense. And about 90 minutes later, I figured out the problem, solved it, and we had the systems up and running.

And what I realized is that you need to have this high-pressure moment, multiple moments, that force everything in your brain to click. And if you don't have it, you'll meander along. But for me, personally, it was that moment of ultra-high pressure that forced everything in my brain to suddenly come into place.

[00:12:18] CS: That's fantastic. I love that. I do want to jump back to something you said in your previous part about how you're seeing a lot of the same problems in cloud that you previously did on the mainframe. Can you give me some examples of some of the problems that have translated across there?

[00:12:34] SA: Yeah, absolutely. So when you think about a mainframe, cut away the green screens and all that detail for a moment. And what you effectively have is a shared computing infrastructure with hundreds or thousands of cores, tons of storage, tons of memory, optimized networking between them, local networking, multi-system networking, and it's all shared resources. And so that single CPU is serving your workloads and it's serving somebody else's. So if you share a CPU and you start to burn a lot of CPU because your code isn't very efficient, you are going to starve out somebody else's work. So then you've got to layer in service level agreements and qualities of service. You start to isolate workloads to make sure that, "Hey, this particular batch job where we're calculating interest is really important. And it must not take longer than two hours." And that deadline is going to dynamically allocate resources differently than, say, a credit card application that can allow some level of variation in service level agreements and response times.

And now you get into: how do I apply service level agreements on top of shared computing? How do I then isolate workloads so that if one system has a problem, the next one doesn't get affected? And a lot of these are the problems that you see in large-scale cloud environments. How do I define my workloads? How do I isolate those workloads? How do I define service level agreements and priorities for one workload versus another? Think of the most basic version as an extra-large Amazon instance versus spot instances, right? That's an extreme example of service level agreements applied to different types of workloads. And then the costs that come in. So having exposure to this early was fantastic, because I got to really understand the challenges that exist in highly shared environments.

[00:14:24] CS: Nice. I love that. So moving on from that, you spent two and a half years working as CTO and then SVP and general manager of business analytics and IoT at Splunk. We talk about Splunk here a lot. What are some types of work that you did there? And were there specific things that you picked up during that time that built on your IBM knowledge?

[00:14:44] SA: Yeah. Actually, before that, I was at GE Capital. So I go through my time at IBM, and I'm an engineer, I'm a developer, I'm an architect. And then I was in the field. So my job for four years at IBM was being a troubleshooter. Whenever a bank with a mainframe had a problem, I was the person that got phoned and flew in to go troubleshoot it. And so I had those four years of constantly being under tremendous pressure to solve problems I didn't really know. And so I ended up becoming a very good learn-it-all, right? So, fast learner, trying to figure out what was going on. And I built this reputation within the banking industry that got me teed up to go be a CIO at GE Capital.

And with GE, I had zero management experience. And so they took this big bet on me of, "Hey, look, you've got the technology depth and breadth. You're able to effectively communicate. You can think about business and all these other things. We will teach you how to be a good manager and leader. You bring that technologist DNA to us at GE Capital." And it was a great experience. And at the end it's how do I use tech to drive revenue, working with this incredible team out of places you don't think about as having strong tech talent. I mean, Danbury, Connecticut. Some of the folks I worked with in Danbury, Connecticut are way better engineers – I'd hire them way before I hired some of the folks I've run into in Silicon Valley. And it's because of domain repetition, all sorts of factors that are different. But in that time, I was a Splunk customer. I got invited to keynotes and talks at Splunk. And Godfrey and I built a relationship. And then he recruited me over to work as the Splunk CTO when GE Capital got divested from the rest of GE. So that's kind of the story in between. As I said, I learned a lot about organizational change and digital transformation, and actually had to do it multiple times as a CIO, including security and other things. So I really come from this practitioner background of having to understand and apply security and be accountable for reporting on security in that CIO role. And that helped me become, I think, a much better Splunker in the following job.

[00:16:50] CS: So what kind of work did you do as a highly qualified expert with the Department of Defense? It's an impressive title, by the way.

[00:16:57] SA: Yeah, it's a hiring authority. And the Department of Defense has this great program where – So in my time at Splunk, I was on the road constantly as the CTO wearing two hats. The first hat was helping our largest customers apply Splunk in a strategic manner, right? So not just solving one problem, but adoption of Splunk as a platform across the entire company. The other part of my role at Splunk, though, was helping Splunk to enter new use cases and markets beyond just IT and security. So think of being able to have Splunk compete with Palantir, or have Splunk competing with ThoughtSpot from a business analytics standpoint, and so on.

Ultimately, Godfrey was really big on moving Splunk into those emerging use cases. When he swapped out and the new CEO came in, there was a difference in strategy where they wanted to double down on IT and security. And you haven't really seen Splunk move into those adjacencies yet. But I learned a lot about high-velocity sales. Being able to bring products to market, all the things that you learn as a CTO and a general manager of sorts.

So now I take a break from industry, nine months at home with family. And I continued, because of my Splunk relationships, to stay in touch with my colleagues on the national security side of the house. And the DOD has this program where people can leave industry and join them for up to five years and help solve very strategic problems. And so I went and did that in support of special operations. Hardest, most meaningful work of my career. And what's amazing is the people I worked alongside. So when you think about people in special operations, you think about fast runners or great shooters, but that's not what makes them special. What makes them special is they are learn-it-alls that can work as a team to solve any problem under tremendous pressure.

And so I had this incredible experience solving problems that matter, working alongside these incredibly talented folks that no one will ever hear about or know what they have done. And so it's just been awesome. I think every leader, as part of their journey, should take a break from industry to solve problems that they care deeply about alongside people that truly inspire them. And in many ways, I view this almost like an MBA or a leadership course for two to three years, and then being able to take those experiences to become a better executive back in the industry.

[00:19:19] CS: And also sort of the – I imagine, similar to the pressure cooker you mentioned before with the Münster experience.

[00:19:27] SA: Exactly. My military experience is watching Tropic Thunder and Jack Ryan. I never enlisted. I knew nothing about rank. I had no practical experience to relate to the folks to my left and right. And that was my concern. But when you come into a completely new environment where you don't know the domain, the goal is to keep your mouth shut, learn and listen. Earn the right to be there. And, actually, that goes back to my early days at IBM. Show up, keep your mouth shut, learn and listen, and earn the right to be there. And then a lot of the organizational transformation and digital transformation work I did at GE directly translated to leveraging technology as a competitive advantage within the Department of Defense. And so what you'll see is a lot of parallels from these early experiences that shaped me as a wide-eyed 22-year-old to serving in this incredibly unique capacity in a completely different world where – You're working on problems where failure isn't an option. And so the pressure that comes along with that.

[00:20:31] CS: Yeah. So let's kind of swirl all the things that we've discussed together, all your different past opportunities and learning experiences, and let's talk about what made you want to found Horizon3.ai. What was the need that you saw in the industry that wasn't being fulfilled? And how does your company fulfill it?

[00:20:47] SA: Yeah. So when I left Splunk in 2017, I had done so with the ambition of starting a company. I felt that I had amassed the technical depth and breadth, and the business depth and breadth, to be able to be a good CEO. And there's a level of maturity in there I think that's atypical, which is, oftentimes you'll see founders graduate college and they want to go start a company right away. And that's awesome. I had always had this ambition, but I really wanted to build up the right Lego blocks and skills before I did that. And so I built everything. My career was all with my sights set on starting and founding a company and growing and scaling that company.

And throughout that time, back to the hacker experience, I'd always had this vision of being able to look at anything. I can look at a car, a train, a boat, or a data center, and in three clicks, I should be able to hack that thing. And of course, it wasn't even possible. But I wanted that level of – how do I apply, at the time it wasn't really AI, but how do I apply AI concepts to discover what's on that machine, fingerprint it and figure out how to own it?

And when you start a company, the most important part of formation is the right cofounder and the right team. And I didn't find the right cofounder or the right team after I left Splunk in my nine months off. So when I went into the special operations world, I, once again, was surrounded by these incredibly talented folks. And as they retired from the military, it was like, "Hey, look, we've had three years of hard, painful time together. We like each other. I understand your skill set. I think you'd be the perfect cofounder." And so Tony, my cofounder now, was a deputy chief technology officer of mine within my national security world. And I said, "You're going to retire, and we're going to start a company." He said he was in and we went.

So the problem that I saw was: when I was at GE Capital, when I was at Splunk, when I was at the Department of Defense, I had no idea if my security tools actually worked. Am I logging the right data in Splunk? Am I alerting on the right things? Do I have the right defense-in-depth strategy? Is my zero trust actually working? The only way I knew that stuff worked was if I got breached. And at that point, it's too late.

Now, the way to assess your posture is to either wait for a hack or pen test yourself. The problem with a pen test is you've got these consultants that show up for 6 to 12 weeks. They're going to do a bunch of manual stuff. They're going to poke you in the eye. They're going to slap you in the face. They're going to tell you how bad you are, and then they're going to leave. And then you're kind of stuck with this bag on fire trying to figure out how to fix it all. And in that time, as you're fixing it, your environment has changed. The threats have changed. And you've got to pay those consultants to show up again.

So my question was, or my thesis was, "How do I look at ourselves through the eyes of the attacker to identify our blind spots, our threat vectors, our ineffective tools and processes? And how do I use that attacker's perspective to find and fix problems that truly matter?" And if that's my thesis, the how-do-I-get-there would be continuous automated penetration testing. And that became the challenge. The very first prototype that Tony and I worked on was running an Nmap scan and piping that output into Neo4j as a graph, and then running queries on the graph to determine which Metasploit module to invoke.

And Tony has a very secure home. He takes it very seriously. He ran it against his house, and his reaction was, "Holy crap! I didn't realize that the sound card on one of my laptops had an embedded server that was running. And it got exploited." And so that's when we knew, with that basic prototype, that we were onto something. And then from there, we decided to start the company. We moved out.
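
For readers who want to picture that first prototype, here is a minimal sketch of the idea in Python – not Horizon3's actual code. It assumes Nmap and a local Neo4j instance are available, and the service-to-module mapping, connection details and module names are invented purely for illustration.

```python
# Sketch: Nmap scan -> Neo4j graph -> query the graph to suggest candidate exploit modules.
import subprocess
import xml.etree.ElementTree as ET

from neo4j import GraphDatabase  # pip install neo4j

# Hypothetical mapping from a fingerprinted service to a candidate exploit module.
SERVICE_TO_MODULE = {
    "vsftpd 2.3.4": "exploit/unix/ftp/vsftpd_234_backdoor",
    "apache tomcat": "exploit/multi/http/tomcat_mgr_upload",
}

def nmap_scan(target: str) -> str:
    """Run a service/version scan and return Nmap's XML output."""
    result = subprocess.run(
        ["nmap", "-sV", "-oX", "-", target],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def load_graph(xml_text: str, driver) -> None:
    """Insert (Host)-[:RUNS]->(Service) nodes parsed from the Nmap XML."""
    root = ET.fromstring(xml_text)
    with driver.session() as session:
        for host in root.iter("host"):
            ip = host.find("address").get("addr")
            for port in host.iter("port"):
                svc = port.find("service")
                if svc is None:
                    continue
                name = f"{svc.get('product', '')} {svc.get('version', '')}".strip()
                session.run(
                    "MERGE (h:Host {ip: $ip}) "
                    "MERGE (s:Service {name: $name, port: $port}) "
                    "MERGE (h)-[:RUNS]->(s)",
                    ip=ip, name=name, port=int(port.get("portid")),
                )

def suggest_modules(driver) -> None:
    """Query the graph and print a candidate module for each matching service."""
    with driver.session() as session:
        rows = session.run(
            "MATCH (h:Host)-[:RUNS]->(s:Service) RETURN h.ip AS ip, s.name AS svc"
        )
        for row in rows:
            for fingerprint, module in SERVICE_TO_MODULE.items():
                if fingerprint in row["svc"].lower():
                    print(f"{row['ip']}: {row['svc']} -> try {module}")

if __name__ == "__main__":
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    load_graph(nmap_scan("192.168.1.0/24"), driver)
    suggest_modules(driver)
    driver.close()
```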

[00:24:38] CS: That's amazing. I love that story. And I love that it's built off such a crystal-clear focus point: I want, in three clicks, to be able to find my way into anything. That's so interesting, rather than the business plans you hear, like, "Well, I just want to get into cybersecurity. I want to do this. I want to –" To start at that point and then reverse engineer from there into something you can actually do is pretty inspiring. So the main topic we wanted to discuss today is vulnerabilities. And that goes quite nicely with what you were just talking about. So when I was introduced to you, one thing you said especially stood out to me, which is that vulnerabilities don't always equal exploitable. I want to tease that out a bit with you, because with the President's recent executive order on cybersecurity, especially with the news on the Colonial Pipeline hack, there is a lot of talk about how this directive can be practically applied and what it means to disclose vulnerabilities. So tell me more about the distinction between vulnerabilities and exploitable vulnerabilities and why the former might not be something you want to spend your resources on.

[00:25:42] SA: Yeah, for sure. So back to my time at GE Capital. Our CISO at GE Capital, a guy named James Beeson, is just a brilliant security thought leader in the industry. I remember hanging out with him at some meeting, and he was completely exasperated, as was my boss, Segall, at the time. We had just run a vulnerability scanner across all of GE Capital Americas. And there were something like 3 million vulnerabilities to go deal with. And of those 3 million, like a million of them were deemed critical. And we're all kind of looking at each other like, "Holy crap! How on earth are we going to remediate a million critical vulnerabilities?"

And I just remember this sheer look of terror on their faces. I'm like, "Well, it sucks for you guys. I'm going to go work on this stuff over here instead." But it was that experience of, "Do we really have a million critical vulnerabilities?" And I kind of sat on that concept for a while. And then back in the department, in my HQE role, same experience. Run an ACAS scan, or a vulnerability scan, and you get hundreds of thousands of vulnerabilities.

Now, as I started doing the deeper analysis, what I realized is, of those 100,000 vulnerabilities from your Qualys report or your Nessus report, maybe 10 or 15 can actually be exploited. And the difference there is having a critical vulnerability on a laptop – Or a concrete example: there was a vulnerability that was deemed critical, but in order to exploit it, it required you to have physical access to the server. And so that means if an attacker is going to exploit it, they need to be in your data center, hardwired into the network backplane. The odds of that being exploited are pretty low. But the problem is it was ranked like a 9.8 out of 10. And so we had to patch this thing. So that's an example where, yeah, you're vulnerable to it. But it either is not exploitable, or the exploitability is so highly unlikely that it's not truly a critical risk.

And so I really got into this notion of: what does it mean to be exploitable? And how do I start to better assess those 100,000 vulnerabilities to figure out what truly matters or not? The other problem I had was, when you run a vulnerability scanner, if you've got 100 machines in your organization, you're basically running 100 individual vulnerability scans. Each scanner is only assessing that one machine it's on. And then the report you get is an aggregation of all 100 findings. But that's not how hackers operate. When you're a cyber attacker, you're not trying to compromise just one machine. What you're doing is chaining together a problem on one machine with a misconfiguration on another with a credential you found somewhere else. And you're hopping across from one machine to another. Well, vulnerability scanners don't tell you that. They don't tell you how an attacker could go from a local unauthenticated user, to domain user, to domain admin. And that's the real threat.

When you look at ransomware, I bet you almost every company in the news for ransomware runs vulnerability scans. Yet they're still getting owned. Because ransomware doesn't look at vulnerabilities on a single machine. They're trying to get to domain user and domain admin and then burrow in. And then from there, they're off trying to lock and encrypt data. And they've got the keys to the kingdom at that point.

That's kind of my big epiphany. And I've been on this quest to help people realize that being vulnerable doesn't mean you're exploitable. And the hardest part of the job in security is deciding what not to do. And how do I help you decide what not to do? Because I've been in your shoes, and the people in my company have been in your shoes. We've been practitioners. We've skipped lunch and canceled plans with our families to solve problems. Let's make sure that you're solving problems that truly matter.
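
A toy way to see the chaining point Snehal makes above: treat each weak link as an edge in a graph and ask whether a path exists from an unauthenticated foothold to domain admin. The hosts, edges and techniques below are invented; only the idea of path-finding over findings reflects what's described here.

```python
# Sketch: individual findings matter less than whether they chain into an attack path.
import networkx as nx

g = nx.DiGraph()
# Each edge is one weak link an attacker could use to move or escalate (all invented).
g.add_edge("unauthenticated", "workstation-07", via="credential relay on the local subnet")
g.add_edge("workstation-07", "helpdesk-user", via="cached credential reuse")
g.add_edge("helpdesk-user", "file-server", via="open share containing stored passwords")
g.add_edge("file-server", "domain-admin", via="service account with admin rights")
g.add_edge("unauthenticated", "printer-03", via="default web admin password")  # a dead end

if nx.has_path(g, "unauthenticated", "domain-admin"):
    path = nx.shortest_path(g, "unauthenticated", "domain-admin")
    print("Exploitable attack path found:")
    for src, dst in zip(path, path[1:]):
        print(f"  {src} -> {dst}  ({g.edges[src, dst]['via']})")
else:
    print("No path to domain admin; the individual findings may be lower priority.")
```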

[00:29:47] CS: So what strategies do you have or recommend for prioritizing these types of vulnerabilities that your security team works with? And what do you recommend be done with the lower priority ones? Are these low-risk vulnerabilities something like a tooth with a cavity in it, where if you don't monitor it, it can get worse? Or are there really certain types of vulnerabilities that can just be out of sight, out of mind? Oops, you're muted.

[00:30:11] SA: Sorry. Excellent question. We actually just did a pretty deep tech talk on this topic. And when you look at that tech talk, you think about what is actually vulnerable versus exploitable. So first, for that vulnerability, does an exploit actually exist? I think it was Kenna Security who put out a research paper that said that most vulnerabilities – Sorry. Less than 2% of vulnerabilities, CVEs, actually have an available exploit. Less than 2%. So that means, step one, does an exploit even exist? If not, then it requires an attacker to do advanced exploitation research, in which case you're starting to talk about a nation state that really wants to go after you. Or you're talking about a zero-day market, which is a very different problem set. But question one is: do exploits actually exist? Yes or no?

The next question is: what is the complexity of exploitation? Is it incredibly complex? For instance, you need physical access to the network backplane. Or is it very simple, making it very easy for an attacker to go off and exploit it, or can the exploitation even be automated? So what you want to look at is the complexity or the conditions required in order to exploit that vulnerability.

The third thing that I look at is – Think of a vulnerability for Apache, like an Apache web server, and you're going to get this a lot. Or for any IBM component or so on. Well, it's one thing to have a vulnerability for Apache generically. But oftentimes, those vulnerabilities relate to very specific plugins or components. So is the suspected software truly vulnerable because that component or plugin is installed? Or is it in the configuration that enables it to be vulnerable or not? So you end up asking a deeper set of questions that requires additional analysis.

Oftentimes, these scanners will tell you about outdated software. So kind of the fourth bullet point here is, if you've got outdated software, that's bad hygiene. But that doesn't mean it's exploitable. Absent a specific vulnerability or configuration, that is important to add to the backlog of work to go clean up. But it doesn't mean you've got to skip lunch and cancel plans with your family right away.

[00:32:29] CS: Right. Right. Yeah, yeah. We’re not going to stop until this version of Adobe Acrobat has been successfully brought up to the present day.

[00:32:37] SA: That's exactly it. And a couple of other things. Back to that component in use: is that part of the product actually accessible? So for instance, you might have a VMware vCenter vulnerability that's absolutely critical. But it's only exploitable if vCenter is publicly exposed to the Internet. Well, not a lot of people expose vCenter to the public Internet. So is that truly an issue? Yes or no?

And then finally, where are you in the network? If you are applying zero trust, the more zero trust you apply, the more segmented your network and the harder it is to move around. So you might have a vulnerability that's critical. But if it's on a lab machine that doesn't enable data theft or systems disruption, is it really that important? Or you might have a vulnerability in your guest network, but it doesn't enable them to break out of the guest network. Is it truly critical or not? So that network context matters in how you prioritize what to fix.
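
Those questions can be boiled down to a rough triage filter. Here is a minimal sketch, with invented field names rather than any particular scanner's schema, of how the checks described above – exploit exists, realistic conditions, component actually present, network context – might be strung together:

```python
# Sketch: filter scanner findings down to the ones that are plausibly exploitable
# and impactful. Field names and sample data are invented for illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float
    exploit_available: bool               # 1. does a public exploit exist?
    requires_physical_access: bool        # 2. how hard are the exploitation conditions?
    vulnerable_component_installed: bool  # 3. is the affected plugin/config actually present?
    internet_exposed: bool                # is the vulnerable part actually reachable?
    network_zone: str                     # e.g. "prod", "guest", "lab"

def truly_urgent(f: Finding) -> bool:
    """Rough 'exploitable and impactful' filter; a complement to CVSS, not a replacement."""
    if not f.exploit_available or f.requires_physical_access:
        return False
    if not f.vulnerable_component_installed:
        return False
    # A critical finding on an isolated lab box or guest network can usually wait.
    if f.network_zone in {"lab", "guest"} and not f.internet_exposed:
        return False
    return True

findings = [
    Finding("CVE-0000-0001", 9.8, True, True, True, False, "prod"),    # critical, but needs physical access
    Finding("CVE-0000-0002", 7.5, True, False, True, True, "prod"),    # lower score, but truly exploitable
    Finding("CVE-0000-0003", 9.1, False, False, True, True, "guest"),  # no known exploit
]
print("Fix now:", [f.cve for f in findings if truly_urgent(f)])  # -> ['CVE-0000-0002']
```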

[00:33:37] CS: So as you said in our conversation for the podcast, due to our current concept of pentesting, vulnerability monitoring, threat hunting and so forth, "Security teams are running on empty trying to patch everything and all things, when in reality, they don't need to." So can you walk us through a more sensible use of a security team? What are the more beneficial uses of a team than trying, as you say, to patch anything and everything? Once you've freed yourself from doing that, what higher-value tasks can the security team do instead to strengthen the security of your network, if they're not picking away at vulnerabilities that are never exploited anywhere?

[00:34:12] SA: Yeah. So there's a cultural standpoint to begin with. When a pen testing team comes in, consultants, or the red team comes in, oftentimes their objective is to embarrass the blue team, or embarrass the defenders. And they're going to come in, poke you in the eye and show you how you suck, basically. And it's a very contentious relationship. But the most effective security organizations have the red team and the blue team working together. The red team's job is to help identify those ineffective tools, processes, policies, those blind spots, and so on. Their job is to inform the blue team where the problems are so they can go off and fix them. And then once they're fixed, the red team can verify that the problems are truly fixed. And it's this red plus blue that sets the conditions for a purple team culture, right?

The most effective security organizations have the red and the blue teams working together. And they're constantly assessing to make sure they understand: what are the attack paths, or threat vectors, or kill chains that are truly exploitable? And what are we doing to stop or plug the holes that enable that exploitation?

[00:35:29] CS: Do you have any practical advice for coming into a company where the red and blue teams are, if not outright combative or in a rivalry with each other, at least not collaborating as well as they could be? Do you have any advice for changing this sort of competitive culture between the two teams?

[00:35:48] SA: Yeah. This goes back to my experience as a leader, and looking at how you get teams to work well together. And so back to even the culture I talked about from the special operations standpoint: learn-it-alls that work well as a team under pressure to solve any problem in front of them. There is a tremendous amount of pressure on security practitioners today. There are way too many problems to fix, and not enough people to fix them. And oftentimes, the blue team in particular is already jaded, because they feel they're chasing and fixing a bunch of problems that can't be exploited. They know it's not a real issue. Yet they still have to go fix it, because some bean counter expected them to. And so you're already operating, on the blue team side, with a set of folks that are overworked, not seeing their families enough, and wasting their time, in their opinion.

On the red team side, you've got a bunch of folks that are equally frustrated, because they can see problems. And then they'll come back a year later, and they'll see the exact same problems are still there. They're basically throwing their papers up in the air saying, “What's going on here? I showed you this a year ago.”

And so as a leader, as a CISO, or director of security engineering or so on, or even the board, what you want to look at is: how long are your critical exploitable problems lingering? Are they sitting around for a day or two? Or are they sitting around for a year or two? How long are these issues sitting around? And what is your remediation time? How long does it take you to actually fix them? And how do you cut your remediation time from, say, 12 months, down to one month, down to one week, down to a day? The best shops that we see, when they've got a critical exploitable problem, within 24 hours they've fixed it. And they're rerunning a pen test to make sure that that problem is truly fixed. And it's that remediation time that you want to look at.

The next thing you want to look at is if you're running a pen test every week, which with consultants sounds ridiculous, but with automated pen testing is actually possible. In fact, many of our customers at Horizon3 run six to eight pen tests per week. They're running pen tests every single night. And in the morning, they're doing a diff between the pen test today and the pen test yesterday and saying, "What new problems did we find and why?"

When you look at that progress over time, you've got the data as the board and leadership to say, "How many new exploitable issues are we introducing every day or every week?" And is that indicative of a deeper-rooted training issue, governance and controls issue, or architecture review issue? You're going to surface policy and process problems because you're adding so many problems within a week. And then how long does it take to fix those issues? And how many of those issues are unchanged? And now you've got the data to show: what is your posture today? And how has that posture improved over time? Which, back to dealing with highly regulated entities, when I was at GE Capital dealing with the board, dealing with financial regulators and so on, being able to clearly articulate your security posture now and how that posture has performed over time, that's the secret that every regulator wanted. But there just wasn't the ability to do that with any sort of accuracy. And that was one of the problems that we thought we'd solve at Horizon3.
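
The nightly "diff" Snehal mentions can be as simple as set arithmetic over finding identifiers. A minimal sketch (the finding IDs and their format are invented) of how new, fixed and lingering issues fall out of two consecutive pen test runs:

```python
# Sketch: compare two pen test runs to see what's new, what's fixed, and what lingers.
yesterday = {"CVE-0000-0002@web01", "weak-cred@db02", "smb-signing-off@fs01"}
today = {"CVE-0000-0002@web01", "smb-signing-off@fs01", "default-cred@app03"}

new_findings = today - yesterday    # introduced since the last run
fixed_findings = yesterday - today  # remediated since the last run
lingering = today & yesterday       # still open; track how long these persist

print("new:", sorted(new_findings))
print("fixed:", sorted(fixed_findings))
print("lingering:", sorted(lingering))
```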

[00:39:14] CS: Okay. So I want to shift over a little bit to the work side of cybersecurity, the cyber work, and I wanted to see if you have any career tips you can give to people who are looking to get into pen testing, or red teaming, or related careers. Are there particular skills or areas of interest that would be especially useful to have, especially in 2021, over other things?

[00:39:34] SA: Yeah. So when I look at the best security engineers, whether it's a security defender or an offensive person, their core competency is that they are world-class network engineers. That is the common denominator. They are experts in networking. They are experts in network engineering. They've had network administrator roles at some point. That's what truly sets great security engineers and pen testers apart from others. If you want to go pass your OSCP certification, the Offensive Security Certified Professional, which is a very difficult ethical hacking certification to get, it is very difficult for you to do that without already being an expert in network engineering. So that is the fundamental core technical skill that you need. It's not that you need a degree in network engineering. But oftentimes, you'll see those practitioners have multiple certificates and accreditations in the networking realm, and they've used that as their foundational knowledge.

Now, that is for infrastructure pen testing. That is separate from application pen testing. So if you want to pen test an application, that's a very different skill. You are a developer, more so than you are a network engineer. And as a developer, honestly, you're a really good troubleshooter, or a diagnostician. Not a pure dev. If you've spent your time debugging software and debugging code, doing level three support, those tend to be the skills that are the foundation from which to go off and train and learn how to do application security testing.

[00:41:12] CS: So I want to demystify a little bit of red team and blue team as work. I think we always sort of imagine, "If I'm on a red team, I'm out there, I'm going rogue, I'm breaking into everything, I'm throwing thumb drives all over the parking lot, and what have you." And I want to see if we can talk a little bit about some of the things that you should enjoy doing when considering one job or the other, red team, or blue team, or some other related thing. Are there boring, or repetitive, or fine-detail aspects of each that novices should know about before they take the plunge?

[00:41:50] SA: Yeah, it's an excellent question, right? So think about the different phases of an attack. The most fun, for some people at least, the extroverts, is the social engineering phase. That is writing the phishing campaign email that says we think you've accessed adult material. That's actually fun. That's more a psychological skill set than it is a pen testing skill set. You're thinking about the human as the weakness, and the various ways to affect that human.

The simplest one is going to be – And you saw this, by the way, with those old Nigerian prince spam emails that we would get in the 90s and the 2000s. You would read it, and the grammar was off, all the words were misspelled. People realized that was actually done intentionally.

[00:42:39] CS: I was just going to say that I heard a podcast about that the other day. Please tell our listeners.

[00:42:42] SA: Yeah. It's fantastic, right? So the theory of the attacker at the time was that a person educated enough or aware enough of the details to realize that this email is grammatically incorrect and full of spelling errors is least likely to actually be duped into this scam. But those that replied, despite the grammar and the spelling, are more likely to be successfully scammed. So they actually used poor English in their emails as a filter to figure out, "Are you going to be successful in scamming this person or not?" And then from there, they do additional techniques to weed them out. And eventually, there was a 2% to 3% success rate at sample sizes of 100 million. That's a pretty decent number of folks.

And so that was the kind of interesting part, the psychological aspect. Putting thumb drives in the parking lot: all it takes is one person to plug one in. And in fact, there are books written about some of the biggest breaches in the Department of Defense back in the mid-2000s in Afghanistan, where supposedly a nation state had compromised all the thumb drives in all of the bazaars outside of a particular military base. And sure enough, eventually, somebody went in, bought a thumb drive and plugged it into their secret laptop. And that's how they were able to compromise the network. I think Dark Territory was the book where they talked about this. So all it takes is one person. So there's the social engineering side of red teaming that's much more about psychology.

Once you get in, and an attacker will get in, it is inevitable. There are just too many doors and windows for an attacker to get into. Now you're getting into the network penetration testing side. That's where your network engineering expertise comes into play. And then from there, you're going to fingerprint a bunch of standard vendor software, Cisco switches, and VMware environments and so on, for which there are exploits and misconfigs to go take advantage of. But you're also going to find custom applications. And that's where the application pen testing skill set comes into play. So it's how you tie those disciplines together into an end-to-end attack that I think is incredibly interesting and also very creative. Think of an offensive cyber operator, whether at a Five Eyes nation state, or an adversarial nation state, or a criminal organization. They have invested the time to master their craft. They are passionate and are the best in the world at what they do. And you can never underestimate an enemy that is investing to master their craft. And I think that's one thing we've got to recognize: we are at a disadvantage when you look at the amount of passionate expertise and horsepower there is on the attack side, and the defenders stand no chance at the moment.

[00:45:40] CS: Yeah, I came up with kind of a bonus question in my head while you were talking earlier, and that leads perfectly into it. So I think we're all kind of blown away here. As you say, you were dropping your soldering iron at age 10, and you've been breaking things down since teenagerhood, and you've been wanting to get into systems in three clicks and stuff. And I think there are a lot of people who are like, "I'm passionate like that. I just need the first step." But I think there are also a lot of people who are like, "I want to do this, but I don't feel like I'm quite at that level of obsession and intensity." As you're saying, we need that kind of obsession and intensity. So if you're not naturally predisposed to that, how do you become that obsessed and intense? Do you have any suggestions for people who need to fake it till they make it?

[00:46:29] SA: I think that the barrier to entry to learn how to do this stuff in the 90s and 2000s was way higher than it is today. In the 90s and 2000s, if you wanted to learn how to be a hacker, or a cyber operator and so on, you were learning through forums, and you were learning through bulletin board services. You were learning by reading computer architecture books and playing around with the physical machines you had access to. The barrier to entry has been dramatically reduced. Today, between a couple of Coursera classes, a couple of paid SANS courses, and a cyber range like Hack The Box, you've got the tools necessary to be absolutely awesome on the offensive side. I think, for instance, platforms like Hack The Box are game changers for learning how to be an offensive cyber person.

[00:47:21] CS: Yeah. Love it. Okay. So, yeah, I guess that's a really great place to start, or to end on here: the tools are so much more accessible. So I just want to get any final thoughts from you on first steps you'd recommend to help get yourself on a good career path, especially if you are someone who maybe is working on a different aspect of cybersecurity, or maybe you're an auto mechanic, or maybe you're a child psychologist and you want to get into this. Do you have any tips for people who want to completely put the brakes on and angle their way into this? Is there ever a point where it's too late to get in?

[00:47:59] SA: It's never too late to get in. Absolutely not. It comes down to your passion and conviction. So let's take one of the best architects we had at Splunk, a guy named Dave Simmons. He was a Carnegie Mellon graduate, but he was a home builder. His real background before he went to school was building houses. And it made him an incredible architect, because if you want to go build a house, you've got to take the outcome and then decompose it into its piece parts. And you've got to figure out how to layer the foundation, with the plumbing, with the wiring, with the framing, and so on and so forth. So the mentality of taking an end state, decomposing it and then being able to execute against it is what made him a great architect. And so similarly, from a cyber standpoint, if you are an auto mechanic, you are a world-class troubleshooter. You're able to take some basic symptoms and figure out, through systematic troubleshooting and diagnosis, where the problem is. And so you've got those inherent characteristics that can't easily be taught. You just have to recognize them and then figure out how to do the gift wrapping. And that gift wrapping is going to be that SANS class. Or the gift wrapping is going to be the –

[00:49:22] CS: The Infosec class?

[00:49:23] SA: The Infosec class. Yeah. Sorry. The Infosec class. Exactly. That's exactly it. But people think that the core skill is the network engineering details. It's not. The core skill is your ability to structure hard problems and break them down, or your ability to communicate, or your ability to persuade as a social engineer. Everything else is gift wrapping. And I think it's never too late for anyone to start.

[00:49:48] CS: That's great. So I'm going to wrap up on that. Thank you very much for taking your time here, Snehal, and especially for taking time when you'd maybe rather be doing other things. I see a nice sunny vista behind you there. So I'll leave you to your day. But as we wrap up, if our listeners want to know more about Snehal Antani or Horizon3.ai, where can they go online?

[00:50:10] SA: Yeah. So horizon3.ai is the URL of the website. You can go there and check it out.

[00:50:14] CS: Numeral three.

[00:50:16] SA: Yep, horizon3.ai. You can find me on LinkedIn. You can Google my name, and you'll find that the luxury of a unique name in the age of Google is that it's very easy to find me. And I've got a number of talks throughout my career that I think are still on YouTube that talk about the lessons I've learned throughout all these phases and stories. And I look forward to hearing from the audience. I'm super easy to find and reach out to.

[00:50:43] CS: Great. Well, listeners, you've now got your extra credit for this particular episode. Go hit the YouTubes. So, Snehal, thank you for joining us today and sharing your history and story with us. Really appreciate it.

[00:50:53] SA: No. Thank you. I really appreciate it, Chris.

[00:50:55] CS: And as always, thank you to everyone listening at home, at work, or at work from home. New episodes of the Cyber Work podcast are available every Monday at 1pm Central, both on our YouTube page and on infosecinstitute.com/podcast, or in audio wherever fine podcasts are downloaded. To read Infosec's latest free ebook, Developing Cybersecurity Talent and Teams, which collects practical team development ideas compiled from industry leaders, including professionals from Raytheon, KPMG Cyber, Booz Allen, NICE, JPMorgan Chase and more, just go to infosecinstitute.com/ebook and start learning today.

Thank you once again to Snehal Antani, and thank you all for watching and listening. We will speak to you next week.

Free cybersecurity training resources!

Infosec recently developed 12 role-guided training plans — all backed by research into skills requested by employers and a panel of cybersecurity subject matter experts. Cyber Work listeners can get all 12 for free — plus free training courses and other resources.


Weekly career advice

Learn how to break into cybersecurity, build new skills and move up the career ladder. Each week on the Cyber Work Podcast, host Chris Sienko sits down with thought leaders from Booz Allen Hamilton, CompTIA, Google, IBM, Veracode and others to discuss the latest cybersecurity workforce trends.


Q&As with industry pros

Have a question about your cybersecurity career? Join our special Cyber Work Live episodes for a Q&A with industry leaders. Get your career questions answered, connect with other industry professionals and take your career to the next level.


Level up your skills

Hack your way to success with career tips from cybersecurity experts. Get concise, actionable advice in each episode — from acing your first certification exam to building a world-class enterprise cybersecurity culture.