[00:00:00] Chris Sienko: Every week on Cyber Work, listeners ask us the same question. What cybersecurity skills should I learn? Well try this, go to infosecinstitute.com/free to get your free cybersecurity talent development eBook. It’s got in-depth training plans for the 12 most common roles including SOC analyst, penetration tester, cloud security engineer, information risk analyst, privacy manager, secure coder and more. We took notes from employers and a team of subject matter experts to build training plans that align with the most in-demand skills. You can use the plans as is or customize them to create a unique training plan that aligns with your own unique career goals. One more time, just go to infosecinstitute.com/free or click the link in the description to get your free training plans plus many more free resources for Cyber Work listeners. Do it. infosecinstitute.com/free. Now, on with the show.
Today on Cyber Work, Paul Giorgi of XM Cyber helps us wrap up 2022 by discussing some of the most unusual and complex attack paths that he and XM have seen in the past year. We discussed some of the most common breaches and methods, as well as a number of attack paths that are the very definition of taking the scenic route, which is of course why they worked for so long. Also, tune in for some great advice about getting involved in the work of risk management and access management. That’s all today on Cyber Work.
[00:01:30] CS: Welcome to this week’s episode of the Cyber Work with InfoSec podcast. Each week, we talk with a different industry thought leader about cybersecurity trends, the way those trends affect the work of InfoSec professionals, while offering tips for breaking in or moving up the ladder in the cybersecurity industry. Paul Giorgi is the director of sales. Aha, I have a better bio for you here. Paul Giorgi got his start in cybersecurity in Southern California with a few DOD and DHS government contracts mostly focused on network security in the late ’90s. He moved to Columbus, Ohio in 2006 and joined FishNet Security, I like that name, where he found his love for sales, engineering and solution architecture. Since then, he has held many positions including CTO, solutions director and principal architect focused on security architecture, design, testing and integration, helping large enterprise organizations combat malware, ransomware and other risks.
Today, we are going to be talking with Paul about XM Cyber and specifically some of the more unusual attack paths that they’ve noticed in their research over the past year. Last year, we talked with people about some of the unusual tech that was being used on people who were visiting home for the holidays and traveling for the first time in a while. But today, we’re going to talk about unusual attack paths, and things like self-driving fleets and so forth. So Paul, thank you for joining me today and welcome to Cyber Work.
[00:03:02] Paul Giorgi: Yeah, thanks. So great being here.
[00:03:05] CS: So let’s start like I always like to do with your origin story. How did you first get interested in computers in tech? What was the initial spark of inspiration? How far back does it go?
[00:03:15] PG: Yeah. I mean, I grew up in the ’80s, where electronics were magical, and Speak and Spell, and these types of things were seemingly just magical devices. My parents would get them for me and the first thing I would do is take them apart, probably the traditional way to –
[00:03:33] CS: You went straight to the hardware then. You want to see what was inside the guts of the thing.
[00:03:37] PG: I think I just always wanted to understand the way things worked. I mean, opening it up meant like, “Okay. I don’t know what any of these components are, but now, I’m trying to identify things.” So that was kind of where it started. My dad would always bring home his old computers from work. For me, I mean, it was the early days of computing, so I remember the first computer that got gifted to me from my dad was an old IBM 8086 with a 10-megabyte hard drive. I remember being like, “This is amazing.” Then, a few years later, my dad was like, “Hey, this is now my retired 386. It’s an old IBM too.” Then, of course, I’m taking that apart, and breaking it and playing with it. I love playing with things.
Then, my dad would bring home all the software that he would have, and I’d load it up and play with it. Of course, there wasn’t Google or anything so you couldn’t just figure stuff out. You just had to learn by breaking things and just doing things all wrong. Eventually, the world of modems and BBSs just blew my mind. The idea of like, hey, this computer over here is talking at 300 baud to this computer over here, and I could transfer files and messages. Then that just kind of propelled me into the world of computer security. Because as soon as I started thinking about, “Hey, I can do this with this other computer remotely,” I started to think, “Well, what can I do that they don’t expect? How can I break things? How can I do things unexpected?” Kind of going back to that same mentality of when you get the Speak and Spell, break it and take it apart, and I just have that same sort of interest in what could be possible.
[00:05:15] CS: Now, it sounds like you’re able to sort of break these things, you were able to also put them back together, then I presume. Sometimes, maybe?
[00:05:23] PG: Not all the time. I think like – it’s actually a really good learning experience to break things to the point where like, “I cannot fix this and this is why. This is the thing that I can’t get past.” That’s a good experience in itself.
[00:05:37] CS: Did you? Were you also interested in programming languages and software type stuff or was it more sort of the tech side of it?
[00:05:46] PG: That came eventually. I remember for my fifth grade Christmas present wanting a copy of physical disks of Red Hat Enterprise Linux.
[00:05:54] CS: Wow.
[00:05:56] PG: Yeah. That was – I kind of had that type of unusual interest in programming. I mean, I remember learning different languages. BASIC, obviously, was one of the first languages I learned. I learned Ada, which really didn’t take off as a programming language. Visual Basic was another one that I got focused on. I always just loved the inch-deep-and-a-mile-wide kind of thing. So I never would call myself a programmer or anything, but I could script, and I would write services and different API stuff, just enough to understand how it works. As soon as I understood how it worked, I would just move on to the next thing.
[00:06:34] CS: Chase the next exciting development point.
[00:06:36] PG: Exactly.
[00:06:37] CS: Well, there’s a lot of benefit to that as we can see here. So I’m looking through some of your career highlights on LinkedIn. That’s another way I would get to know our guests. One job title comes up several times, both in your current role and in past roles. I’m speaking of sales engineer. I know many of our listeners use this podcast to help narrow down the type of career they’d like to have in tech or security. So can you walk us through in a broad way what a sales engineer’s responsibility are? What type of skills and qualifications do you need to enter a job role like that?
[00:07:08] PG: Yeah. I mean, I may be biased, but in my opinion, I think sales engineering is really the best role in the world. Because you get to do all of the tech fun stuff, but you also have the pay and a lot of the extra perks that you get out of a sales role. For example, I can’t tell you how many sports things I’ve been to. I’m not interested in sports at all, but hey, all these customers want to go see the Super Bowl or the Final Four, and I’m like, I guess I’ll go, and I don’t know who’s playing, but I’ll go. So you get all those perks from a sales role, but you don’t have the pressure of sales. When the team isn’t delivering on the quota from a sales perspective, they never fire the sales engineer. It’s always the salesperson who has that pressure. So you get all of that from the sales role, plus all of the tech fun stuff of an engineering role, without the responsibility of actually being expected to get it to work.
[00:08:03] CS: Interesting.
[00:08:04] PG: I can make outlandish claims – not claims, but I can be like, “Hey, these are all the things that you can do with this solution,” but I’m not actually necessarily the one in charge of getting it to work. I kind of sell the idea of the solution and say how it can fit in that organization or how it could solve their problem. Then more than likely, if something breaks, or, “Hey, we’re trying to get this stood up,” that’s not on my plate. I’m really in charge of just helping the sales team sell the solution and handling all the technical side of the sales process. So figuring out what the customer is struggling with, figuring out the details of their environment, trying to align use cases, and testing, and doing proofs of concept and all of that type of stuff when it comes to the sales process, but on the technical side of it.
[00:08:53] CS: Okay. Yeah, that makes more sense. I was having a hard time sort of visualizing it in my head. So for instance, with our skills platform where people learn cybersecurity skills online, the salesperson is like, “You need this,” and they say, “Yes.” But they don’t necessarily know how to do all the walkthrough, so they bring in the person from the content department. Okay, gotcha. So you’re sort of showing the tech walkthrough of what you guys do. Is that right?
[00:09:19] PG: Yep. Yeah, exactly. A salesperson’s whole job is just to sell a thing. They may or may not even be familiar with it or even how to turn it on. They don’t know how it’s going to help that organization, but their whole job is just keep making phone calls until someone says, “Hey, that sounds interesting.” As soon as they have that interest, then they bring in the sales engineer to kind of figure out like, “Hey! They said they’re interested in our thing. Figure out why they’re interested and let’s try to sell it to them.”
[00:09:42] CS: I imagine that it helps you to learn new scenarios within your products as well, right? Because you’re not just doing the thing that it was built to do. Now you’re sitting there, and they’re saying, “Well, we needed to do this one particular thing,” and you’re like, “Well, no one’s ever asked for that before,” and so then you have to kind of sit with the product and figure out how to make it work. Is that right?
[00:10:01] PG: Yeah, kind of. Over my career, I’ve switched between two different types of sales engineering roles. There’s the VAR side of things, the value-added resellers, like when I worked at Optiv, and FishNet and DeFY Security. These are all – we’ll sell you anything. When you’re doing sales engineering for that, that’s really difficult. Because one meeting, you’re talking about data security; the very next meeting, you’re talking about network security. Then the very next meeting, you’re talking about GRC. Then the very next meeting, you’re talking about cloud security. So you have to be an expert at everything.
But then, if you look at my LinkedIn experience, I’ve kind of thrown in different vendor gigs throughout everything. I’ll get the experience of, hey, whatever, and I’ll just be a generalist, and I’ll get really excited about a certain area, and then I’ll focus in on it and I’ll join a company. Like, I was at Exabeam for a while. I loved user anomaly detection, and behavioral analysis, and SIEM and those types of things. I was like, “You know what? I’m going to jump from where I’m at, Optiv, and I’m going to take a sales engineering role at Exabeam.” I did that for a couple years, then I went back to DeFY. Then, I got really excited about breach and attack simulation. So then, I was like, “You know what, I want to just really focus on breach and attack simulation for a while.” That was about two years ago, and that’s how long I’ve been at XM Cyber.
[00:11:24] CS: Cool.
[00:11:25] PG: It is really cool hearing all of the different ways that customers are interested in things. When I’m at a VAR, I get to see what I’m hearing most commonly. Like, “Man, people are really interested in UEBA, I’m going to jump into Exabeam.” Then, “Man, people are really talking about breach and attack simulation. I love it. I’m going to just really focus on that for the next couple years.” So that’s where I’m at.
[00:11:45] CS: That’s cool. Now, putting some fence posts around that, what does an average work day look like as director of sales engineering at XM Cyber? Do you have certain types of responsibilities that happen first, at the beginning of the day? Do you have certain quotas or things like that? How much time is spent interfacing with clients? How much is spent interfacing with your technical team, things like that?
[00:12:13] PG: Yeah. As a director of sales engineering, I lead our sales engineering team. It’s still growing and I support the Americas team, so I’ve got a couple of other SEs that report up to me. From the sales engineering world, I still am very much a player-coach model, where I think that if most people saw the way I interact, they would have a hard time differentiating, “Is he a sales engineer or is he the manager-director of the sales engineers?” Because I’m still very much doing sales engineering stuff. I’ll focus on those types of responsibilities. At any given time, I have anywhere from 5 to 10 POCs going on. During those POCs, there’s usually some sort of, “This is the stuff that we’re accomplishing this week. This is up for testing. Here’s the stuff we’re capturing.” So regular POC processes are definitely a large part of it.
Upon completion of a POC, you have to put together all your findings and deliverables, then review them in a presentation, usually with some sort of executive sponsor. So that’s a different role. You always have to be making yourself aware of customer environments, because like we were saying, there’s a lot of variety. I mean, I’ll work with one customer and they’re like, “We’re fully in GCP. Everything’s functioning in GCP. We want to simulate attacks within GCP.” I’m like, “Okay, great. Google Cloud environment. I’m familiar with it.” Then the next day, I’ll talk with somebody like, “We can’t stand GCP. Everything’s Azure.” So to be a really good sales engineer, you have to be familiar with the customer’s environment.
I would say that a good, large portion of my time is just getting familiar with customers’ environments, their solutions, their struggles. I have lab environments of all of the main cloud providers. I’ve got a huge lab environment at my house. We have lab environments at XM. The whole goal is just for me to be able to get hands-on familiarity with what our customers are dealing with. That way, I can help them address their struggles and their problems directly without just ambiguously saying, “Hey, I think this will help you. You figure it out.” I like to test it first before saying the solution can do that or solve this problem.
[00:14:19] CS: Yeah. Hearing that you have so many different irons in the fire at the same time. I’m imagining there’s a strong sort of project management component of this. You have to really be like very, very organized in terms of all the different pieces that are in progress. You’re asking for a test on this thing. Are you coming back? Are you answering this question while also getting data from this other group? I imagine it’s sort of a logistical challenge as well in that regard, right?
[00:14:45] PG: It is, yeah. You definitely have to be organized. Luckily, most sales teams have a lot of different tools to help you track that. I mean, Salesforce is obviously a common one that you create an opportunity, and you put all your notes and you’re tracking all your activities in it. So there’s ways to just – if you struggle with being organized, that there’s crutches to help. If you learn how to use those crutches, you don’t have to be the most organized person in the world. There definitely is – you have to be on track of, “Okay, we started this POC on this date. We’re going to end it on this date. They have this date to make a decision. It’s for this budget cycle and trying to keep track of those.”
[00:15:24] CS: You could tell that I was asking that question for myself, right? No, I am not that organized, and that does make it sound a little more accessible, so that’s good. Thank you for that. By the time this episode airs, we’ll be well into 2023. It will probably be late January. But as we’re recording it, everyone is days out from checking out for the end-of-year holidays, which is why we’re taking stock and kind of looking back at different aspects of security in the year 2022. So your topic of choice was some of the strangest, most unexpected, most potentially dangerous attack vectors that you and XM Cyber encountered. So let’s start out, Paul, by taking a high-level view here. What were some of the biggest overall trends in cybercrime this year, especially as regards breaches and vulnerabilities? What were the main areas that XM was focused on?
[00:16:11] PG: Yeah, 2022 was a weird year. We had a lot of weird high-profile vulnerabilities. Think about Log4j, Spring4Shell, Follina. There were a lot of really high-profile vulnerabilities, where if I think back, most of the time, I feel like there would always be one big one a year. For whatever reason, this year had a lot of them. Within breach and attack simulation, within what we do with attack path management at XM, we’re really focused on trying to see how this new vulnerability can introduce risk or be operationalized by an attacker to put your critical assets at risk. So that definitely played a lot into it. We have a lot of organizations really assessing the whole situation going on in Ukraine and Russia, trying to evaluate like, “Hey, we have a sizable workforce over in Ukraine. What is the risk to our critical assets, maybe somewhere else in the world, from Ukraine or from Russia?”
Then being able to simulate that, let’s run through some simulations saying, “Okay. Of all of the people that we have in Ukraine, if, whatever, they were an insider threat, or their account gets compromised, or an entity of theirs gets compromised, is there any risk to maybe some sort of critical asset holding intellectual property, sitting maybe here in the States?” There have been a lot of questions around that, especially with XM, where they’re saying, “I want to run a scenario where all of my workforce in Ukraine is compromised. Then what happens, and what risk is introduced, and what are the ways that they’re able to get to my critical assets?” That’s another one.
Third-party risk was a really big one as well. You think about a lot of the different breaches that we had with SolarWinds and Kaseya, and trying to figure out, “Okay. How do I, as a security team, prevent these types of things from happening? We’re deploying SolarWinds, and how am I supposed to be able to assess how secure these solutions are, or the risks that I’m introducing with these solutions that I’m deploying?” So assessing cyber risk from a third-party perspective is also a really big one.
The last two, ransomware attacks, everyone’s talking about ransomware attacks, and I feel like –
[00:18:16] CS: So much ransomware.
[00:18:17] PG: I feel like a good 90% of all the POCs that I have going on right now say, “Hey, one of the scenarios I want to simulate is a ransomware attack. I want to see what happens in my environment and how hardened I am against a ransomware attack, or maybe the opposite.” Then the last one is vulnerability prioritization. I think we’ve now seen that – I think I was looking at the CVE totals for this year. Again, they went up. Pretty much every single year, we have more and more. I think we’re at 23,000 vulnerabilities for 2022. It’s just impossible for teams to actually patch, and remediate and track all these anymore. So prioritizing these efforts means figuring out a real way to address vulnerabilities in an organization without just scanning everything and taking a count, saying, “Hey, we have 4,000 instances of Log4j that are vulnerable.”
Then, you have the board and executives saying, “Cool. What does that mean? Does this put us at risk? Can you put this into terms that I can understand? Don’t just tell me how many instances we have of it.” I think for 2022, those were all the big ones that I can think of, but I’m sure there are so many others; it was just a fun world. I think another big one was we had a lot of people returning from remote workforces coming back into the offices. So we’re now starting to shift the ways that we have implemented things like zero trust in our network environments, and maybe shifting them around a little bit. Maybe instead of doing zero trust, introducing more micro and macro segmentation and adopting different technologies and strategies like that. Where, “Hey, we didn’t have a chance to try that out two years ago, because everyone was just distributed and working from home. But now that we have a better understanding of it, now we’re going to see if all of these things that we’ve been designing over the last two years actually work for all of our workforce that are back in the office.”
[00:20:09] CS: Yeah, I was going to ask that. You’re absolutely right. I mean, we’ve just seen a barrage of recurring, recurring, recurring issues, vulnerabilities, ransomware and things like that. Do you get a sense, based on how chaotic this year was in that regard, that lessons were being learned? Is this the sort of thing where people just had to really take an L, and then a year or two from now, everyone’s going to kind of get their footing back or something like that? Do you feel like this actually came to something at the end of the year?
[00:20:44] PG: Yeah, I think so. I think there’s always lessons to be learned. I think that’s why I love breach and attack simulation so much. It’s like, how about we teach you those lessons without you going through the pain of customer data loss or intellectual property loss. I think that anytime we’re dealing with something new, there’s a chance for us to learn from it. I think that we definitely learned a lot of things. I think everyone’s still trying to figure out how zero trust, and how a zero trust network architecture, fits into most organizations. I think there’s been some big failures. A lot of people followed a vendor and said, “Hey, this is our zero trust solution,” and then didn’t realize that it’s more of a methodology, and then really just had it fail.
It’s the same sort of thing as with DLP programs years before that, where, “Hey, I bought a DLP tool and it does DLP for me.” Then you realize, “Well, you can’t just buy a tool. You have to do data lifecycle management, data classification and all these other things. If you don’t do those things, then a tool is not going to help you.” So I think that we’re learning from 2022 and all the different things – the overwhelming vulnerabilities, the shift back to the workplace, the failed zero trust network architectures – and then maybe kind of coming together with, “Hey, this isn’t zero trust, but it’s close, and I think that this works better for us.” I think that’s kind of the state that I see most organizations in.
[00:22:06] CS: Yeah. Now, I mean, I just got off a previous recording with another guest who was talking about the recent Pentagon directive to move DOD towards a zero trust solution. Do you have any thoughts on that particular implementation? We were trying to figure out whether or not like the deadline was reasonable considering just how many sort of input points or –
[00:22:32] PG: Yeah. I mean, I worked in the DOD for a while; that’s where I got my start. They’re always a trailblazer, but the last to adopt. They talk a big game, and then they never actually do it. I think this is kind of another example of that, where they need to be some sort of authority and put together on-paper frameworks and guidelines, knowing full well that they aren’t going to adopt them. I put together an IPv6 architecture plan – it was 2002. Everyone was like, “Hey, we’re running out of IP addresses.” This was for a large DOD instance with a whole Class B, and we had this whole plan to do IPv6. To this day, it still has not been implemented, but a lot of the stuff that we wrote down, the reference material and plans, was beneficial, and it was good for us to walk through that. I think that’s kind of the same thing we’re seeing with the Pentagon’s implementation of zero trust. It’s going to be great reference material, but we’re never going to see it actually implemented.
[00:23:32] CS: Or the timescale of that – 2002! Because I started working here in 2012, 2013, and IPv6 at that point already had the feeling of an environmental catastrophe. Like, we have to implement this now. And here we are 10 years later, and you’re telling me that 10 years before that, they were saying, “It would be a good idea if we did this right now.”
[00:23:54] PG: Yeah. Or 2002, when no one was really using NAT. NAT was this thing that was like, “Hey, everything just has a public, like a non-RFC 1918, address on it.” I managed multiple Class B networks, and every single computer had a public-facing IP address, which, when you think about how that is today, just blows your mind. Not only is it a waste of routable IP addresses, but it’s really insecure. So yeah, now that we all are hiding behind NAT, and some organizations have hundreds of thousands of devices all underneath a single IP address, it really lessens the urgency of the IPv6 transition.
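[Editor’s note: the RFC 1918 split Paul describes is easy to check programmatically. A minimal sketch in Python – my own illustration, not anything from XM Cyber – using the standard library’s `ipaddress` module:]

```python
import ipaddress

def is_publicly_routable(addr: str) -> bool:
    """True if addr falls outside RFC 1918 private space
    (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) and other
    non-routable ranges such as loopback and link-local."""
    ip = ipaddress.ip_address(addr)
    return not (ip.is_private or ip.is_loopback
                or ip.is_link_local or ip.is_reserved)

# Every host on a pre-NAT Class B like the one described sat on a
# globally routable address; today most hosts live in RFC 1918 space.
print(is_publicly_routable("128.120.5.9"))   # a public Class B host
print(is_publicly_routable("192.168.1.10"))  # RFC 1918, hidden behind NAT
```

[The example addresses are hypothetical; `ipaddress` classifies them against the IANA special-purpose registries.]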
[00:24:32] CS: Yeah, we’re on to other bigger fires now. Compared with some of the recurring examples that we mentioned previously, were there any especially convoluted, complex or just plain strange examples of security pivots that led to successful breaches and exploitations that you saw or tried this year?
[00:24:52] PG: Yeah, I think there are actually a lot of them. I think my favorite ones combine multiple different types of things. I’ll give you an example. I prefer attack paths that don’t just say, “Hey, you use this vulnerability to get to this machine. And then from that, you compromise this other machine.” I like to look at real-world examples of compromises, whether it’s Colonial Pipeline or any of the big ones, and see what it was that happened. You look at, hey, a credential was where it started, and it was just a compromised credential. Either they found it on the dark net, or just sitting in some repository, or a phish – however they got it, that’s the start of the breach. That’s the initial breach point. Then from there, what’s possible, and what’s the attack surface from that compromised credential? Then from there, positioning yourself like, “Hey, I can leverage these entities in this way to do this thing.” Then from there, I could position these entities to do this other thing in that way.
So, piecing all of those things together to build out attack paths. A couple that I saw were really interesting. We had a really large insurance organization run through different simulations to test out some digital transformation that they were working on. They realized that, “Hey, there is a really common choke point.” We call them choke points. You look at attacks, and they all kind of converge on this one entity. It was all this one developer system. This one developer system was not only misconfigured, but it had a vulnerability. It had a misconfiguration where it was looking for a proxy server, and they didn’t use a proxy server in that environment.
An attacker could see that broadcast, say, “I’ll be your proxy server,” respond to it, and basically allow themselves to be a man in the middle. Once they positioned themselves as a man in the middle, there was also a vulnerability on that same machine, where anything that you positioned from a Windows Update perspective would get run by the Windows Update subsystem. You could drop any executable and say, “Hey, here’s a Windows update for you.” It would just blindly say, “Yes, I’ll run it.” So you could take a piece of ransomware, or malware, whatever you want, because from your position as the proxy server, you’re seeing that traffic. In those Windows updates, you could just drop in a different executable, and it would run it.
We saw all of these attacks coming through this one developer’s system because it was misconfigured and it had that vulnerability. But what was scary was that the developer system was really risky, because developers are lazy in a good way. They’re meant to be efficient. They don’t want to spend a lot of time doing two-factor authentication, so they introduce shortcuts. A lot of these shortcuts rely on private SSH keys that don’t have passphrases, sitting in really open areas of their system. A lot of them have their AWS credentials sitting in their AWS CLI config, so they can just quickly do things without actually having to authenticate anywhere. It’s all just done from the command line. They often will have different API credentials sitting there. We saw that, man – once that machine is compromised, we are able to compromise a good 90% of the rest of the environment, both in cloud and on-prem.
We realized, “Hey, not only is it relatively easy to compromise this machine because of the vulnerabilities and misconfigurations, but from that one machine, I could pick any entity in the whole environment, of which there are hundreds of thousands.” I can click on an S3 bucket of theirs and trace it back to that one developer’s machine. Click on this EC2 instance, look at the attack path, and you’re like, “Hey, there’s that developer’s machine again.” So what we did was, we patched it, and we introduced a lot of just, “Hey! We need to clean up your hygiene from a development standpoint.” We were able to reduce that scenario from 90% of their environment regularly being compromised down to only about 12%. So it was a huge reduction in attack surface, because you get rid of those choke points. You remediate that one choke point, instead of just saying, “We’re going to patch all vulnerabilities everywhere.” You now have the ability to say that one developer’s machine is really risky to us because of everything going on on it.
I need to just fix the couple of ways that we’ve seen that machine get taken over and then used to compromise other machines. That was one really good attack path. For the month of October, Cybersecurity Awareness Month, we had – I think it was every single day – all of us from the field sharing interesting attack paths. What I just heard from our marketing team is that we’re actually putting together a book. So we’re going to have a book with all of these different interesting attack paths, showing how people don’t necessarily think the way you’d think from an attacker standpoint. From a security defensive standpoint, you look at everything kind of myopically. It’s like, is this system secure? Yes. Okay, move over to this next entity. Is this entity secure? Yes, move over to the next. But it doesn’t necessarily mean that there isn’t still something on that machine that would allow you to then compromise this other machine, and that’s really where attack paths become important.
[00:29:59] CS: Now, when I hear stories like that – and we do hear them a lot – this is the most interesting one, because it’s a developer who’s there right now. So it’s an active account. A lot of things that you hear are like, “Well, there’s this old user that hasn’t been used in seven years. Somehow, we didn’t even think to scan for it,” and things like that. I mean, is there a speed component to this as well? Were they able to kind of leisurely set up this proxy server with the developer and make all this movement within the span of a couple days, a couple months? Or had that been in place for a long time when you guys found it?
[00:30:45] PG: Yeah, I think it had been in place for a long time. The problem was, as the developer continued to do more and more bad hygiene stuff, the risk just kept getting greater and greater. While you could have always compromised that developer’s machine, it kept getting worse and worse as the developer continued to use bad practices.
[00:31:08] CS: Got it.
[00:31:08] PG: “I’m just going to start putting all these private SSH keys for all of our Kubernetes clusters in this folder.” If you compromise that machine, you now own all 8,000 nodes of their Kubernetes cluster, because all of the SSH keys are sitting there with no passphrase. It was an escalating series of events that just made that developer’s machine very risky.
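The passphrase problem Paul describes is detectable: legacy PEM private keys advertise encryption with an `ENCRYPTED` header, while new-format OpenSSH keys record their cipher (which is the literal string `none` when no passphrase is set) inside the base64 payload. Here is a minimal, illustrative scanner built on those two facts; the function names and the heuristic are the editor's, not a production tool:

```python
import base64
import struct
from pathlib import Path

def _openssh_cipher(blob: bytes) -> str:
    """Read the cipher name from an openssh-key-v1 binary blob."""
    magic = b"openssh-key-v1\x00"
    if not blob.startswith(magic):
        return ""
    off = len(magic)
    (n,) = struct.unpack(">I", blob[off:off + 4])  # length-prefixed string
    return blob[off + 4:off + 4 + n].decode()

def is_unprotected_key(text: str) -> bool:
    """Heuristic: True if text looks like a private key with no passphrase."""
    if "PRIVATE KEY" not in text:
        return False
    if "OPENSSH PRIVATE KEY" in text:
        # New-format keys: decode the base64 body and check the cipher.
        body = "".join(l for l in text.splitlines() if "-----" not in l)
        return _openssh_cipher(base64.b64decode(body)) == "none"
    # Legacy PEM keys advertise encryption in their headers.
    return "ENCRYPTED" not in text

def scan(folder: str):
    """Yield paths of apparently unprotected private keys under folder."""
    for p in Path(folder).rglob("*"):
        if p.is_file():
            try:
                if is_unprotected_key(p.read_text(errors="ignore")):
                    yield p
            except OSError:
                pass  # unreadable file, skip
```

Running something like this over developer home directories would have flagged the folder of passphrase-less cluster keys long before an attacker found it.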
[00:31:32] CS: A case study of putting all your eggs in one basket. That’s amazing. Yeah.
[00:31:35] PG: Exactly.
[00:31:35] CS: So one topic that was brought up before the show was the Uber breach back in September, and going from that, a vulnerability that you found that would allow a hacker access to a server responsible for controlling unmanned vehicles. A few years back, one of my earliest guests on the show was Alissa Knight, who worked heavily in this realm, up to and including writing a whole book about hacking connected cars. At the time, one of the most shocking things we discovered was that some of these vulnerabilities could have been very easily and cheaply mitigated, in some cases for the cost of a $2 FireWire cable or some such, and the problem could have been avoided. What, in your experience, is the current security landscape for connected and unmanned vehicles? Are we still suffering from the “for want of a cable” mindset, or have the struggles changed and taken new shapes?
[00:32:23] PG: Yeah. There are a lot of topics that you hit on there. The Uber breach, I love using that as an example, because that is also a really great attack path example, where most organizations aren’t evaluating their risk from the lens of an Uber breach. You start from, I think it was a compromised credential that happened from phishing, which led to them being able to access their internal network. From there, they were able to access another system, and eventually they got to some really sensitive stuff. That’s a great example of an attack path where the breach point wasn’t necessarily some publicly exposed vulnerability. It was a credential that was compromised.
So understanding what would happen if this credential were compromised, or this third-party credential, or an insider threat. Running through those is a great starting breach point for an attack path, and then carrying it out, combining behaviors and vulnerabilities to see what’s possible from that credential. I love that example. As for the IoT question, the “Hey, is there a $2 cable that we’re all just neglecting?” side of it, from my perspective, I see a lot of focus on compensating controls without even an awareness of how direct remediations could actually play a role in fixing it. Everyone’s so focused on the shiny tool that’s supposed to prevent that from happening. I would say there are more people focused on the shiny tool to prevent bad things from happening than there are focusing on the actual remediation itself.
I’ll give a couple of examples, like XDR and EDR tools. They’re great. I love them. But over and over, we’re seeing that they’re relatively easy to bypass, whether you’re doing NTDLL unhooking or something else. There are a lot of different ways for you to bypass an EDR solution. Why do we still have so much trust in these solutions to prevent all these bad things from happening? Misconfigurations, bad hygiene, vulnerabilities, bad user behavior. It’s all this understanding of, “Well, we have EDR. It will protect us.” Oh my gosh, why are we spending so much time on that when we should really be asking, why don’t we patch that system? Why don’t we address the misconfiguration? I can’t tell you how many organizations I go to or do POCs with where we see just bad credential hygiene: domain admin credentials sitting cached in LSASS. If that machine were compromised, I could use a tool like Mimikatz or LaZagne, or just dump it manually out of LSASS. Now I have a domain admin credential.
There should be no reason why I should find a cached domain admin credential, because you should never be allowing interactive logons with any sort of domain admin credential. That’s just bad practice. But there are GPOs you can enforce saying, “Deny interactive logon for domain admins,” and you can put users in the Protected Users group. There are so many direct fixes. Instead of actually applying them, we spend more and more money and hire more and more security engineers and architects to have all these compensating controls, when in reality, the direct fix is probably easier than all this other stuff that you’re doing. I feel like, as a security world, because it’s new and there are so many new solutions, we’re chasing the next shiny thing. We’re so much more interested in all these compensating controls, and the sexiness of actually fixing the true risk is deprioritized.
[00:36:00] CS: Yeah, absolutely. I think it always ends up coming down to thinking through the problem before throwing money at it, or saying, “Well, everyone’s doing these new things, we’d better do them as well,” and stuff like that. So obviously, we’ve touched on a lot of things: smart cars, IoT devices, server-controlled vehicle fleets. It’s all pretty much here to stay, so we’re going to have to learn to make it safer. I guess you were already sort of talking about that. What are some high-level, across-the-board security recommendations you have for purveyors of these types of hackable and potentially dangerous technologies in the new year?
[00:36:39] PG: The way I can relate to this, or compare it, is hyperconverged infrastructure. Remember when VoIP was starting to take off? We couldn’t have VoIP phones sitting on the same networks as all of our computers and stuff; we needed to have them separated, on separate systems. Then hyperconverged came around, and you had your storage SAN running IP in the same IP space as your voice, and the voice also running right next to your computers. Then the consumerization of IT came around, and everyone had their phones and their iPads all sitting on the same Wi-Fi network. I think of this as just the next evolution of that same mindset of, “Oh no, there’s this new thing, how do I address it?” We need to start thinking more holistically about risk. The reason I say that is, we can’t just myopically, from a really focused area, assess just IoT or just smart cars, because that’s not the way an attacker looks at it. They’re opportunistic, and they’re going to say, “Hey, if I can do that thing over here to then get me over there, I’m going to do that,” whatever is the path of least resistance.
I think we need to start evaluating holistically across everything. I don’t think there are a lot of people evaluating IoT security along the lines of, “Hey, let’s look at how Active Directory in my environment plays a role within IoT.” We aren’t really looking at that. We’re looking more at open source, what hardware, what OS and what vulnerabilities are on it, but not necessarily how that, combined with Active Directory, might play a role. Or, “Hey, if a credential or some sort of artifact is compromised in that IoT environment, what could then be leveraged within the cloud environment or within my data center? From what choke points can I jump from my IoT network to other networks?” There are so many ways to think about things, and I think most organizations are still struggling with just the idea of IoT security or smart cars.
When you look at where your critical assets are, where your likely breach points are, and then all of the ways that all of those other entities play a role in compromising your critical assets from those breach points, then you really start thinking the way an attacker looks at things.
[00:39:03] CS: Yeah. Well, okay, let’s start thinking about that. For listeners who want to get into the work of creating and implementing solutions for vulnerable technology like unmanned vehicles and more, can you talk about the combination of work experience, certifications and tasks they should be aiming for in their daily work life to get them on the right path here? Are there certain experiences or backgrounds that these types of companies are looking for?
[00:39:28] PG: Yeah. I’ve always been a proponent of just playing: getting certifications, getting experience, getting hands-on in the lab as the better way to learn. That was the way I went. I started going down the college path, and then, about a year into college, I realized, “Man, I have the job that all of these people sitting in my class are hoping to get afterward. I just really need to focus on the job and get as much on-the-job experience and training as I can.” I’m technically a college dropout. From there, I really just focused on getting as many experiences and as many certifications as I could and used that. Now that I’ve been in this industry for, I don’t know, 20-something years, you end up seeing that there are a lot of people whose parents paid for a four-year degree somewhere, and then they’ll have some sort of computer security title. There are also a lot of people who are chasing the money, where it’s like, “Hey, cybersecurity pays really well. I’m going to get into cybersecurity.”
That’s not somebody I think has the best ability to excel in this career. I think you need to find somebody who’s just passionate about whatever it is they’re securing. I don’t think you jump directly into cybersecurity. You need to have some sort of fundamental understanding underneath it. Whether you love networking, application development, cloud architecture or maybe even DevOps, there are security components to all of those. Think about security as being like a 102 or 103 class, or a 201 or 301 class in college, where to really get to that next step, you have to have an understanding of how things work. You can’t just jump into application security without understanding how applications work, and you can’t jump into network security without understanding how networks work.
My best career advice is just do it. Break things. Get a lab set up at your house, put as many things in it as you can, break them over and over again until you understand how they break, and then see how you can fix them. When I interview candidates, I usually spend more time talking about what they’re naturally interested in than about what college degree or certifications they have. What are you playing with in your home lab? What makes up your lab? Have you played with whatever the latest cool tool is out there? “Hey, did you see this new sandboxing tool? Did you get a chance to play with this new Helm chart that’s coming out?”
If you are really passionate about what you’re doing, you’re just going to naturally start finding ways to be successful within the security industry. I search for those people who are passionate enough about it that even when they aren’t working, they’re still interested in it. They just naturally have this inclination to play with things, learn about things and break things.
[00:42:20] CS: Love it. That’s great advice. As we wrap up today, I want to ask: can you tell people a little bit more about XM Cyber and your current offerings, and whether you have any new projects or unveilings that you’re looking forward to in 2023? Feel free to trumpet about it.
[00:42:42] PG: Yeah. We at XM Cyber fall under the category of breach and attack simulation. It’s a very wide category. There are solutions doing security control validation, and there are solutions doing automated pen testing. While there is a little bit of overlap, we’ve been really focused on attack path management. Attack path management is evaluating all of the things an attacker could do without actually doing them. What’s great about our solution is you can run a scenario every single day using the telemetry we’re collecting and say, “Okay, given what we know about the environment, these are all the ways an attacker could compromise your critical assets.”
I use the analogy of Google Maps all the time. In Google Maps, you have to define two things: a starting address and a desired destination. Google Maps will say, “This is the recommended route, this is a route that avoids tolls, and this is a route that’s more scenic.” We do that same exact thing from an attacker’s perspective. You define a breach point, whether it’s a compromised credential like in the Uber situation, and then you define a critical asset. We’ll say, “Okay, these are the six steps, or these are the ten steps, that are possible from an attacker’s perspective,” giving that insight. Remember, we were talking about lessons learned when things break or you have these compromises. We now have the ability to learn those lessons every day without the pain of recovery, notifying customers and that type of stuff.
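The Google Maps analogy maps directly onto graph search: the breach point is the starting address, the critical asset is the destination, and a breadth-first search finds the route with the fewest attacker steps, with an exclusion set standing in for "avoid tolls." This is a toy sketch of the idea, not XM Cyber's engine, and all the node names are hypothetical:

```python
from collections import deque

# Toy attack graph (illustrative names only): each edge is one attacker step.
steps = {
    "phished-cred": ["vpn-gateway"],
    "vpn-gateway": ["jump-host", "file-share"],
    "jump-host": ["domain-controller"],
    "file-share": ["backup-server"],
    "backup-server": ["domain-controller"],
}

def shortest_attack_path(graph, breach, asset, avoid=frozenset()):
    """BFS for the fewest attacker steps from breach point to critical
    asset, optionally avoiding nodes (the 'route that avoids tolls')."""
    queue = deque([[breach]])
    seen = {breach}
    while queue:
        path = queue.popleft()
        if path[-1] == asset:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen and nxt not in avoid:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # asset unreachable from this breach point

# Recommended route, then the alternate route if the jump host is hardened.
print(shortest_attack_path(steps, "phished-cred", "domain-controller"))
print(shortest_attack_path(steps, "phished-cred", "domain-controller",
                           avoid={"jump-host"}))
```

Re-running the search with a remediated node in the `avoid` set shows whether a fix actually severs the route or just pushes the attacker onto a longer scenic one.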
Overall, we’ve been trying to focus on and expand our capabilities within breach and attack simulation. We acquired a company called Cyber Observer about six months ago. What they do is evaluate your controls. Remember what’s possible from an attacker’s perspective; now we’re going to be able to start showing how complex the attack path is, taking into account the controls that are in place. When we were talking about bypassing EDR earlier, if there’s a step where we say, “Hey, this is possible on this entity,” we’re now going to be aware that, yes, you have CrowdStrike, or SentinelOne, or whatever the EDR solution is, and say, “Okay, there’s a policy preventing that from happening, so we’re going to increase the complexity.” So you’re going to have awareness of your tools and the role they play in preventing that from happening, while still being aware of the underlying risk, knowing that, going back to my point earlier, those EDR tools and those compensating controls aren’t a direct remediation. They’re kind of a safety net: if something bad does happen, hopefully something’s going to catch me and break my fall.
The last thing that we just started doing is supporting Kubernetes. A lot of organizations have a lot of critical assets, or a lot of concern around breach points, within Kubernetes, so we’re getting that capability nailed down. Then I think our manifest destiny, looking at what’s probably next for us, is around attack surface management. Beyond breach and attack simulation, there’s a whole category of solutions within attack surface management. It’s that dynamic discovery of likely breach points. In our solution, you have to define them. You can define them with a rule or some discovery rule, but for us to say, “Hey, I don’t know where a likely breach point is,” we can’t really do that very well. I think we’re going to start getting more into attack surface management, so I could just type in a domain name, say acme.com, and then we discover, hey, these are the exposed systems, this is a vulnerable system over here, and this is a public-facing DNS server that can be abused this way.
You could just say, “Okay, run a scenario from the breach points that are likely to happen from a public-facing environment.” If I had to guess where 2023 is going to lead us from a capabilities standpoint, I think that’s the next step for us: being able to understand what the likely breach points are and then define scenarios based off of that.
[00:46:26] CS: Awesome. So one last question for all the marbles here. If listeners want to learn more about Paul Giorgi or XM Cyber, where should they go online?
[00:46:34] PG: Yeah, xmcyber.com. Easy one. I’ve got my LinkedIn profile as well. We’re pretty active at most marketing events. We’re going to have a sizable presence at places like RSA and Black Hat and those kinds of typical conferences. So if you happen to be at a large conference, chances are you’ll see a booth of ours where you can talk to us. Or you can go to our website, and we can do a demo, a POC, whatever it is. It’s a really fun solution to play around with.
[00:47:04] CS: Sweet. Paul, thanks for joining me today and giving our listeners an entertaining wrap up on the vulnerability ups and downs of 2022. It was great.
[00:47:12] PG: Cool. Thanks, Chris.
[00:47:13] CS: As always, I’d like to thank everyone at home for listening to and watching the Cyber Work podcast on an unprecedented scale in 2022. We’ve doubled, nearly tripled, our numbers, and we’re delighted to have you along for the ride. Before I go, I just want to say, go to infosecinstitute.com/free to get your free Cybersecurity Talent Development eBook. It’s got in-depth training plans for the 12 most common roles, including SOC analyst, penetration tester, cloud security engineer, information risk analyst, privacy manager, secure coder and more. We took notes from employers and a team of subject matter experts to build training plans that align with the most in-demand skills. You can use the plans as is, or customize them to create a unique training plan that aligns with your own unique career goals.
One more time, just go to infosecinstitute.com/free, or click the link in the description that I’m assuming is down there, to get your free training plans, plus many more free resources for Cyber Work listeners. Thank you once again to Paul Giorgi. Thank you all for a great 2022, and thank you so much for watching and listening. We will speak to you next week.