Chris Sienko: Hello and welcome to another episode of CyberSpeak with Infosec Institute. Today’s guest is Michael Figueroa, the Executive Director of the Advanced Cybersecurity Center, or ACSC. We’re going to be talking to Michael today about aspects of Red Team operations, and specifically we’re going to discuss the ACSC’s first Collaborative Defense Simulation, which took place back on September 24th. It was an event that brought together 20 ACSC member teams and 100 participants, to explore the challenges that organizations face when responding to large scale cyber attacks and the opportunity to address these challenges through collaboration with other organizations.
We’ll talk about the group’s findings and about the larger topics of Red Team use within large scale corporate or government targets. Michael Figueroa, CISSP, brings to the ACSC a diverse cyber security background, serving at times as an executive technology strategist, chief architect, product manager and disruptive technology champion.
He promotes the optimistic security approach that emphasizes the need to better assist the users, operators, and business owners in protecting their critical assets, versus blaming them for being unable to properly configure and maintain complex technology.
His past work has spanned a broad security spectrum. In the advanced technology space, Figueroa has prepared cyber technologies for transition, managed research and development applying emerging technologies such as deep learning and human analytics to security problems, and led the technology design and development of an innovative, secure network and communications platform for cloud and mobile applications.
As an enterprise security architect, Figueroa has managed teams securing large-scale system integration efforts for several US government agencies, including the Departments of Defense, Homeland Security and Veterans Affairs.
Figueroa is a graduate of the Massachusetts Institute of Technology in brain and cognitive science and of the George Washington University in forensic science, concentrating on high-tech crime investigations. Michael, thank you for being here today.
Michael Figueroa: Oh, thanks for having me, Chris. It’s a pleasure.
Chris: Great. Well, thank you. Let’s start out with a little bit about your security journey. You started in brain and cognitive science and then moved over into forensic science. Was security and tech always part of your interest or did you move into that avenue later in life?
Michael: I think a lot of us will talk about how security chose us. I was your basic, growing up, little small-scale hacker. I happened to get a Commodore 64 from my grandfather for Christmas one year.
And when you turn it on that first time and you just have that cursor blinking at you, you sit there thinking, “Okay, now what do I do with it?” So a friend and I started learning how to code, and this was back in the days when you learned how to code by pulling a magazine off a rack, right?
Chris: Yep. Yeah, you put in 1,200 lines of code and watch a little man jump across the screen.
Michael: That’s exactly it. We wrote our first game, we called it The Dark Tower. It was a dungeon crawler, and it crashed when we found our first troll, and we looked at each other and went, “Okay, well that’s done.”
Chris: Mm-hmm (affirmative).
Michael: You kind of elevate from there. You end up getting a modem and you start learning about what we used to call BBSs, right. [crosstalk 00:03:21] BBSs. So that’s really my initial hacking background. When I went to undergrad at MIT, it was this amazing thing where all of a sudden we had this thing called the internet, and we could talk to people all around the world, but we used to find little hacks here and there, ways you could disrupt people when they’re doing their homework and all that stuff.
When you look at the hacker background of a lot of us in the security community, it’s that prankster, exploration sort of background. You start learning how networks work and doing all this stuff. So I was always in the security domain space. I think it’s really a mindset thing, right? But with my cognitive science degree, I was really focused on human-computer interaction, which kind of turned into, “How do we use technology to manipulate people to do the things we want them to do?” Which is pretty much how software is designed, right?
It just so happened that with the advent of the web, I started looking at all of the misuse cases or the attack cases of how software could break, and force people to do things that they didn’t necessarily mean to do. So I just kind of naturally fell into security through just a dumb luck of my interest in the domain space I was working in.
Chris: Right. So how did your path go from standard defensive security operations into more offensive programs, like Red Team operations?
Michael: I think, I made note of the attack case side of things. I used to get called on by my colleagues in my company when I started in a consulting background. And consulting for the government, especially on a security side, people always think about it from a high security, from a clearance kind of thing, but we used to talk a lot about a disaster response, what happens if the grid goes down, all the critical infrastructure things. And I would get called on to sort of advise as to how would the scenario work, where that could actually happen, and how would that impact us from a computer perspective, or an IT perspective, or from a cybersecurity perspective?
So I think that my mindset was already always focused on the mentality of the attack, and that’s how it would inform the defensive actions and defensive designs that I would put in. So it really was a natural progression to learning how to communicate about the basis of attack, in order to help people and help organizations understand, not just how do they defend themselves, but how do they think about defense from that attack perspective.
Chris: Okay, and I guess it’s worth doing a follow-up on that. You can think from the mind of an attacker, but it seems like it never occurred to you to use your hacking powers for evil. It sounds like you’ve always been, “I want to see how this breaks,” but you don’t care to break other people’s things except in an offensive-security context.
Michael: I guess, yeah, there’s an ethical side to that mindset, right? A lot of us who have been in the space for a long time will talk about how we could leverage our powers for evil versus for good, and probably be a lot more lucrative doing so. Back in the days when I was just getting into security, my spouse would often say, “Well, so what you’re telling me is if we go to a non-extradition country, you can actually use your skills, and we could actually live really, really comfortably.”
Michael: There’s definitely a part of that, but I suppose if you look at it from a pure cultural and societal perspective, one is a very short-term sort of thinking, versus the other, where you have the opportunity to create a bit of a legacy, I suppose, of really raising the baseline, and hopefully getting to a position where you can see people and organizations actually doing things better for the people they serve, for their customers or for the citizens, that sort of thing.
Chris: Yeah. Security operations aren’t just plugging this attack, plugging that attack. There’s this constant arms race where hackers come up with new strategies and you come up with counter-strategies, but by coming up with those counter-strategies, you’re elevating the culture in that way.
Michael: Yeah, and I think that there’s … Who was it who said it recently? I think it was Bruce Schneier who said something recently about the need for more people to move into public service roles, and more community-support-oriented roles, and to leverage their expertise to really help inform, help educate, and help us do things better.
So I think that when you have that mindset, it becomes a natural transition to start to say, “Okay, I might be able to do one thing and it helps me, but imagine being able to do the same thing and help many others.” There are many of us in the community for whom that’s the way we think about how we can apply our craft.
Chris: So to that end, I want to talk today about a fairly large scale topic that you’ve been involved with for quite some time now, which is … Let’s start by explaining the ACSC Collaborative Defense Simulation. This happened back in September, is that correct?
Michael: Yeah, our first iteration was back in September. The ACSC is a 501(c)(3) nonprofit membership organization. We started as an information sharing, a threat sharing sort of cooperative. The organization really began about 10 years ago, and we became independent about seven years ago, with the objective of identifying ways that organizations can better learn from each other, starting with threats, but now about practice: how do we actually practice working together to solve and address common cause problems?
And one of my passion areas was focused on these large-scale attacks. You think about the Mirai Botnet attack, or you think about a large-scale Ransomware attack like NotPetya or WannaCry. When you look at some of these large-scale attacks, they’re fairly indiscriminate and they can affect a large group of organizations. And sometimes, if you look at NotPetya with Maersk, for instance, how badly it damaged Maersk’s logistical operations for shipping around the world.
We’re sort of limited by the scope of our own imagination, of what a given attack in this space can actually do. So we had this brainstorm at the ACSC, as we’re thinking about how can organizations learn from each other. Really it’s about operationalizing the communication channels, and that’s what the simulation was based on.
It was based on this concept that we can take these large scale attacks, put them into sort of a game or gamify the attacks in such a way where organizations can start to understand from a process oriented perspective, how to conduct themselves in such a way to defend themselves against the attack, but also keep the attack from spreading. So, that was really the basis of the simulation exercise that we did in September.
Chris: So, tell me about the actual process of it. Was it done in secret and then the findings revealed later? If it’s a simulation, were people allowed to watch along the way? Who were the organizations involved? And what were some of the targets and weaknesses you expected to find?
Michael: Well, because of the framework that we have set up for our information sharing activities, all of our member-oriented activities are done under an organizational NDA. In this case, we’re dealing with executives working in very sensitive areas of their communications flows and their incident response plans, and sharing amongst each other. So there’s immediately a barrier as far as communications are concerned.
This time the simulation was conducted to experiment with a framework that we’re planning to develop over the next few years. So it was only members in the room, with the premise that we weren’t going to disclose or attribute any given weakness to any given member. So I can’t say who specifically participated in the exercise, though folks can go to our website and see our member list. It’s not a big list, so you can get a sense of the kinds of organizations that were participating. But it really was an opportunity for these members who … Generally our members tend to be organizations with very, very sophisticated security functions and security operations.
So we’re talking about organizations that know how to protect themselves against large-scale attacks. We brought them together into a room for a common cause attack and tested their response plans, not so much against each other but in concert with each other, to see how one organization does it versus another, and really what kind of resources one organization has that another doesn’t. Being able to do that sort of activity is how we were driving the findings of what capabilities we have to stop a large-scale attack. That’s really the wonderful part of what we’re trying to achieve with the Collaborative Defense Simulations.
Chris: Okay. I’m still trying to wrap my mind around the minute-by-minute methodology. So you have all these people in the room together, and they are looking at what? What is happening in the moment that they’re responding to, or sharing resources about?
Michael: Sure. This was based on a six-month planning exercise with a committee of member participants, employees from our member companies who have conducted exercises at their own companies. So when you think about the typical Red Team or active drill exercise, these are the people we brought in to help with the planning, and it was almost a competition. It was their opportunity to tell their executives, “This is how things really can mess up.”
So the six-month process was about crafting a scenario that would warrant executive-level attention, and we used a combination of things. The Mirai Botnet attack against Dyn was one basis of the scenario, combined with an outage of a Cloud service provider, and that provider not communicating to its customers about what’s actually going on. It just goes out and everyone is blind as to why.
Michael: So really bringing up that executive level of exposure: when something really bad is happening, that’s when executives get involved in the situation, and then they’re trying to get information from their staff. So we designed the scenario based on those two attacks. During the course of the day, and I believe it was a six-hour total exercise, each organization brought in a team of people from various roles, not just security roles.
It was led by the CSO, but it might have the CIO, it might have the … We had communications managers in the room, we had legal counsel, we had some public sector folks from a disaster response or emergency response perspective. And we broke it up into three sections, covering two weeks of simulated events, using what we call in the exercise world “injects”: the various triggers that are going to warrant a level of response.
So what would happen is we would set the baseline: here’s the situation right now that you need to address, here are the different triggers that your organization is seeing coming in, this is what’s being reported to you from your security operations folks. Okay, now what are your decision paths? What do you do? Then they get into a real discussion of organizational process, and we go through three stages of that: you get initial indications of the attack, then you get the attack itself, and then you see that the attack has broader damage implications for your organization. So that’s how things go during the course of the day: we’re giving them the conditions to respond to at multiple points.
Chris: Okay, so you’re using a combination of attack and defense techniques. You’re using Red Team operations, you’re using penetration testing, you’re using other things. It sounds like you’re basically throwing every weapon at the wall.
Michael: Yeah, when you think of it from an attack case perspective, or a scenario-oriented perspective, what you’re doing is mimicking the different types of activities that might be going on in order to achieve an objective from an attack perspective, right? In this case, we were saying it was a broad damage objective against the Cloud services provider. But when you think about, for example, the Red Teaming domain space, it’s really encompassing it into a story that makes sense to the participants, right. And when you’re doing a traditional Red Team exercise, you’re testing against weakness as opposed to vulnerability, right?
You’re trying to achieve a goal. In Red Team exercises I’ve managed, for example, you’re trying to get to the customer database. So you have that objective and you have people who are focused on what it takes to achieve it. What we did within this simulation is we mimicked in the scenario what the attackers were doing in making progress toward their objective, and then sent information to the executives for making decisions based on how the attack is progressing toward that objective.
Chris: I see.
Michael: And how do they even start having the conversation as to what the objective is, to understand whether or not their organization is a target or just collateral damage, for example. Those are the kinds of questions that executives need to be able to answer in these attacks.
Chris: So it’s basically like, they’ve gotten through this defense and this defense, what do you do now, and then each person says, “Yes, this,” or, “Shut down this part,” or whatever.
Michael: Yeah, that’s right, and when you say this defense and that defense, you do it from an executive viewpoint, right? When you’re doing it in a standard Red Team scenario, you’re doing it really at a technical level: “I’ve bypassed this layer of the network segments,” or “I’ve gotten into this domain controller and now I’m moving laterally.” That’s the way you might see it from a traditional Red Team, but from our perspective, it’s at that executive view.
Okay, now we’re getting reports that our customer database is out in the wild. How does an executive respond to something like that, for example?

How do they direct their response team to go onto the dark web and find the forum where the data is, or understand what the data looks like, so that they know what the potential source may have been and can even prove whether or not it can be attributed to their database? Those are the sorts of questions that executives need to be able to answer in these cases.
Chris: Okay. And what’s the scale of this attack? Are we talking about an attack on an enterprise? On a government agency? A nonprofit, a for-profit? Are these concepts that can be applied across the board?
Michael: In our case, we’re really looking for those large-scale attacks that have a community-level damage impact.
Chris: We’re talking like nation state level attacks, and stuff like that.
Michael: It could be nation state level. I hesitate because the Mirai Botnet, of course, as we understand it based on reporting, was launched by a couple of young people. Now, the infrastructure associated with it was much broader, but we don’t really look at it so much as a state versus criminal designation. Really, we’re looking at it from an impact potential perspective.
Chris: Okay, okay.
Michael: So, it could be a simple attack that just happened to take down the wrong resource, or it could be something more complex or APT-oriented.
Chris: Yeah, I guess I’m trying to get my head around the size of the resources. If you’re hitting a public resource or something like that, you could be hitting it with something with huge budgets and lots and lots of different attacks, or like you said, it could just be the right two kids at the right time. So you’re trying all of that stuff.
Michael: Yeah, and we intentionally chose an attack on a Cloud provider, sort of [crosstalk 00:21:20] this, because one of the things we’ve heard through our engagements with our members is that working with Cloud providers during an incident response event is very, very difficult, because a lot of times the provider will just send you to legal counsel so that you can argue over what the contract says they need to provide you.
And there are times, and we know this, there are times when a Cloud provider might go down and their dashboard doesn’t necessarily say that they’re down, right? So you’re sitting there going, “Well, are they down, or is there something wrong with my connection?”
Chris: With my computer?
Michael: Or something like that. So from that perspective, we kind of normalized the damage impact across organizations of different levels of sophistication. You could be a small organization that just happens to be running a server that all of a sudden goes down. Or you could be a large enterprise that’s outsourcing a significant business function to a Cloud service provider, like Netflix does, for example.
The damage potential for that attack on a service provider can be different depending on the organization, but a lot of organizations would be affected by it.
Chris: So what were some of the results of your findings? What were some of the large scale issues that you discovered using these Red Team exercises, and what things need to be done to fix these issues?
Michael: I think that, by the nature of this broader-scale exercise, we’re identifying weaknesses at the community level now, as opposed to within the organization. One of the really interesting findings was that even organizations that conduct exercises regularly found gaps in their own response plans.
We had one organization where the CSO actually looked over at the communications manager and said, “Do we actually have a plan for when I talk to you?” Things like that.
Chris: Right, right.
Michael: So a lot of times when they’re conducting exercises, they’re doing it from what we traditionally call an incident response perspective, which is just in the security domain, and they don’t really address all the various roles that suddenly become activated to support organizational response to an incident, right? So one of the key findings in that domain space was: how do they keep the incident response plan fresh when new people are coming into those non-security roles?
You may not know that your communications manager has left the organization, for example, and somebody new has come in. They may not even know that they’re supposed to be talking to you. So the importance of having those multidisciplinary exercises was really a key finding.
Outside of that, one of the most interesting findings was this: you might have an organization that is really, really sophisticated, with fantastic capabilities, down to being able to reverse engineer malware, do deep-level assessments of it and figure out how things work, versus another organization that may not have that resource. In the midst of the incident, the organization that has the resource says, “Hey, we have a resource here. Can you send us what you see so that we can add it to what we have, and we can work on it together?”
Michael: Even when they expose that resource and make it available, the organization that doesn’t have the resource won’t seek it out. So that demonstrated to us a broader need to build not just stronger communications but stronger operational connections between the organizations, to really combat these large-scale attacks, keep them from spreading and keep them from causing more impact to the community. I think that’s probably the most interesting finding we got out of this first simulation.
Chris: Okay. So what are some of the common methodologies that your Red Teams are employing? You said you hit the Cloud service and so forth, but on a more general level, what types of attacks are Red Teams in general using to breach these types of defenses? Where do you start? What’s the “start here, move to here, move to here, escalate” progression?
Michael: Well, in this case we were really focused on going after the Cloud. What a Red Team is traditionally going to do when seeking an objective is try to find the shortest path, the path of least resistance, to meeting that objective, right? And since so many services are moving to the Cloud, the first step is: let’s say you want to bring down a Netflix, for example. You’re not necessarily going to target Netflix. Maybe you’re going to target the Amazon server farm in Northern Virginia, right?
So when you’re looking at that from a Red Team, scenario-oriented perspective, that path of least resistance, you’re going to look at where an organization is weakest and where its ability to respond is poorest, right? Thinking of it from that perspective, we picked the Cloud services provider because that is a problem area from a business process perspective, and that’s what communicates to executives where cybersecurity really starts impacting them from a business risk perspective, to help expose some of those things.
Then it’s a matter of thinking from the attacker’s perspective. If I’m targeting one organization, I might try to bring down the web services provider, but if I actually want to exfiltrate data, somehow I need to jump the gap from the service provider into the organization. So we built into the scenario that it wasn’t just the provider going down: now we’re starting to see servers going down, and as I mentioned before, a company starts seeing customer databases that were housed with that service provider showing up for sale on dark web forums.
Michael: So the Red Team tactic there was jumping the hypervisor. They went from the server that’s running the various hosts for multiple organizations on one piece of physical equipment into the virtual servers in that environment, because of a Zero-day against the hypervisor that provider was using. So you can see how we gamed out how the attackers are technically moving through, but turned it into a business context for testing decision making.
Chris: Wow. One of the more lurid aspects, I think, when people think of Red Team operations is the aspect of physical breaches. It sounds like you were able to penetrate the Cloud service defenses through purely technical means. Is the physical aspect of Red Teaming kind of overplayed? Because you hear stories of kidnapping the CEO or breaking windows or forcing your way in, things like that. Is that really a big part of it, or is it becoming a sort of urban myth?
Michael: I wouldn’t say that an organization shouldn’t do its due diligence from a physical security perspective. I’ve certainly been in plenty of instances where I’ve been able to access physical resources I shouldn’t have had access to, right.
When I was a CSO, I focused a lot on the physical environment because I saw it as one of my big weaknesses. But now we’re moving so many resources outside of organizational control, like the Cloud: we’re running infrastructure as a service at a Cloud services provider, or a lot of our services are now software as a service, so we’re just logging into a site. I think that for most organizations, physical access is much less critical than credential access, and credential access really is the keys to the kingdom these days.
One of the things we’ve heard in my two years with the ACSC, the one striking theme from organizations from a defense perspective, is that phishing is the vector. Phishing is the vector for taking down just about anything, and it’s usually because of credential theft. There’s nothing physical about credential theft anymore. So as far as the physical thing goes, it’s nice to be able to say, “Oh, I broke into this defense contractor who’s supposed to have … They’ve got security guards and turnstiles. They’ve got badges and all of that stuff-”
Chris: All I did was walk through their front door.
Michael: All I did was walk through their front door, or through their loading dock, you know, I’ve done that one. There’s a lot of glitz to that, but I think it’s mainly because that’s what business people understand.
Michael: That’s the connection you can make to the business decision makers about how important security is, because cyber is such an abstract domain that they just don’t really get it otherwise. So the physical side is still important in certain industries, of course, but for most organizations, I don’t think it’s as critical as some of these other areas they should be focusing on.
Chris: So to wrap up a little bit, what is the future of the Collaborative Defense Simulation? Is this going to be a regular exercise? Is it going to be yearly? Will you be making adjustments based on this year’s results?
Michael: Yeah. I just came out of a board of directors meeting for the ACSC yesterday, where we were talking about that very thing. The simulation is going to expand. In the 2019 timeframe, our plan is to conduct another major simulation with our members under the same conditions as before, and we’re going to test some different things. For example, we had all the organization teams working together in the last simulation. In the next simulation, we’re talking about putting the same roles together from different organizations.
So now we’ve got the communications managers at one table, the CSOs at one table, the analysts at one table, legal counsel at a table, et cetera, so that they can sort of compete against the other roles as to what their needs are, how they’re meeting those needs in a response, and where the conflict comes between roles.
So I think that gives us an opportunity to really explore what we would consider the leading practice for each role in a given incident. We’re going to continue to use our members to test those different aspects of how we make business-level decisions around large-scale incident response. But beyond that, we’re planning on leveraging what we’ve learned to start building more community-level simulations, simulations that go outside of our membership.
We’re looking at … One of the big questions that came out of the last one was how the public sector would respond in these situations. How would state emergency services respond when all of a sudden people are calling law enforcement about how their email isn’t working, and you start getting 911 calls about the incident? How does the public sector start leveraging what industry knows? How do the federal government and intelligence capabilities start to feed into this: “Oh, I’m hearing chatter that this is a state sponsored attack versus a criminal-level attack. We’re hearing various levels of human intelligence associated with that.”
So we want to do more simulations and we want to start getting deeper. This year we’re really focusing on that tabletop aspect of decision making. Moving into the next year, the 2020 or 2021 timeframe, we want to start building it into an active drill, really that hands-on-keyboard sort of response, and build in what you would typically think of as a Red Team, Blue Team component of actually battling in an active drill environment.
So I see a lot of exciting opportunities for us here to build that engagement and really elevate our knowledge of how we work together to stop these really large-scale attacks, instead of staying confined within our own organizational boundaries.
Chris: It sounds like we’ve got enough to talk about for a future episode. We’d love to have you back if you’d be interested in talking again sometime.
Michael: Absolutely. I’m always happy to talk about this stuff. It’s a privilege for me Chris.
Chris: So if our listeners are interested in learning more about the ACSC and possibly becoming members themselves, where would they go? What should their backgrounds be? What should they be interested in?
Michael: Well, mainly our core constituency is the CSO and the security director, the security operations director level, though we’re building out more networks for the communications folks and legal counsel and all of that. But you can go to acscenter.org and take a look at some of our information, and if anybody’s interested in learning more about this program and some of our other research around how we work better collaboratively in defense against common threats and common weaknesses, they can always send us an email at email@example.com
Chris: Michael, thank you for joining us today.
Michael: Great Chris, thank you so much.
Chris: And thank you all for listening and watching. If you enjoyed today’s video, you can find many more of them on our YouTube page. Just go to YouTube and type in InfoSec Institute to check out our collection of tutorials, interviews, and past webinars. If you’d rather have us in your ears during your workday, all of our videos including this one are also available as audio podcasts.
Please visit infosecinstitute.com/cyberspeak for the full list of episodes. If you’d like to qualify for a free pair of headphones with a class signup, podcast listeners can go to infosecinstitute.com/podcast to learn more. And if you’d like to try our free SecurityIQ package, which includes phishing simulators you can use to fake-phish and then educate your colleagues and friends in the ways of security awareness, and as Michael said, phishing is the vector, please visit infosecinstitute.com/securityIQ.
Thanks once again to Michael Figueroa and thank you all for watching and listening. We’ll speak to you next week.