The current state of artificial intelligence in cybersecurity

Eric Stevens, vice president of engineering and principal architect at ProtectWise, discusses the current state of artificial intelligence in cybersecurity and the company's recent report on the topic, "The State of AI in Cybersecurity."

– Get your FREE cybersecurity training resources: https://www.infosecinstitute.com/free
– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

Chris Sienko: Hello and welcome to another episode of Cyber Speak with Infosec Institute. Today's guest is Eric Stevens, vice president of engineering and principal architect at ProtectWise. Eric is going to tell us about a recent white paper published by ProtectWise entitled The State of AI in Cybersecurity. Eric leads engineering, technology, architecture, delivery and infrastructure for the ProtectWise cloud-delivered network detection and response platform and related technologies. He joined ProtectWise as a founding team member and the company's principal architect, and he drives innovation in security for IT and OT environments. He's an experienced architect in distributed systems design, data processing at massive scale and cloud computing, and he holds a BS in computer science from Millersville University of Pennsylvania. Eric, thank you for your time today.

Eric Stevens: Thanks for having me.

Chris: So let's talk a little bit about your security journey. Where did you first get interested in computers and tech, and how did you transition specifically to security and engineering?

Eric: I'm not sure I ever really transitioned into it. I fell in love with computers the very first time I ever touched one, at my best friend's house when I was probably seven years old. He had a Commodore 64, and we learned how to write Mad Libs programs that asked you little questions and then made silly sentences. The very first experience I ever had with computers was programming. Then my cousin gave me an 86, I learned BASIC, I got involved in BBSes. When I got to college, I started doing packet capture and fuzzing and trying to exploit systems. I learned a lot from all my mailing lists and contacts. So, man, for me, it's always been there. It's been in my DNA.

Chris: You're a lifer.

Eric: I'm definitely a lifer. I destroyed computers as I came to understand them better. I learned the hard way what partitioning means.

Chris: Oh. Also, the reference to the Commodore 64 means we are exactly the same age. So let's start out by defining our terms here, since we're talking about AI and cybersecurity. When we talk about AI as it pertains to cybersecurity and AI-enabled projects, what exactly are we talking about? What kind of tasks is artificial intelligence doing with regard to online security?

Eric: It's a broad term, and different people mean different things. AI has a very strong definition, but how people interpret it is a bit subjective, and it's unfortunately subject to a lot of marketing confusion, with a lot of people calling things AI or ML that may not justify the term. I think, though, when most people think about AI, they're thinking about some of the more statistical disciplines like machine learning, artificial neural networks, deep learning and other things like that. The term is broader than that, obviously. We have things like chatbots that have no automated learning but technically count as AI. But I don't think it's [inaudible 00:03:09] from the perspective of the research that we did on this front. We left it up to the respondents to interpret according to what they think of when they think of AI, and I think that those statistical [inaudible 00:03:24] are probably where most of the respondents were coming from.

Chris: So what benefits do AI-enabled security products have over other security packages? What type of real-time strategies does AI allow you to utilize that might not otherwise be available?

Eric: A lot of it is still about enabling humans. It's surfacing associations that wouldn't be obvious to a human, and identifying anomalous behavior and insider threats, where you may not be able to have a human review an individual's emails from a privacy perspective, but you can have AI get in there and do that, and then raise a flag when it sees problems. And then, from my personal perspective, some of the most interesting stuff is getting into automating investigation and response, and especially improving an analyst's ability by pulling together the right information and saving them the time of having to go find that information down the road.

Chris: So what types of companies are using AI-enabled security products these days, and why?

Eric: It's not strongly industry- or size-dependent. Our research focused on companies with a thousand or more employees, so we weren't looking at SMBs, but past that threshold there's not a strong size element. It tended to be based more on alert volume. We didn't even find in the data that it was based on the maturity of the security team; small security teams were as likely to be implementing AI as large security teams. The bigger driver was really alert volume. People with more alert volume are investing more in AI. Perhaps it's an alert fatigue problem they're trying to address.

Chris: Okay. So we're talking today about a white paper that ProtectWise published with statistics about AI-enabled security measures. You noted that, on the whole, companies' executive class, more than the security team members themselves, are the primary drivers of enthusiasm for AI-enabled security. Why do you think this is?

Eric: I think there are a number of driving factors behind that. Some element of it is thought leadership. These are people whose function is to lead the charge and make sure that the organization is modernized and up to date, and they definitely don't want to miss an opportunity there. They want to take advantage of the tooling that's available to them. There's always a little element of CYA, covering their bases, making sure they can answer to their own superiors when asked, "Hey, are you guys using this hot new thing that I heard about?" They want to be able to talk about it informatively and say with high confidence what value they get out of it. So I think that's part of the function: to be the people who are out there in the lead, driving things forward.

Chris: Do you think that there's a reticence among the in-the-trenches security people about AI-based security, fears that it would incur redundancies or layoffs in the department?

Eric: I'm not sure that most security analysts today are worried about redundancy and layoffs. The state of the art in cybersecurity AI, at least, is still very much about empowering humans. It's about addressing the cybersecurity skills shortage and other things like that. I don't think the in-the-trenches people are worried about AI replacing their jobs at this point, because there are still a lot of problems with AI. There's a long way to go. There's a lot to do before AI would be in a position to begin replacing jobs in this industry. In other industries, that's probably less true.

Chris: It seems like there might be a perception, at least when I was spitballing these questions around the department, that there's some misunderstanding about what exactly AI-based security can actually do. From what I'm hearing, it sounds like it's nowhere near the point where it can replace an actual security person. It's more about automating low-level processes and such.

Eric: Yeah, I think it is somewhat about automating low-level processes. If you take the very broad definition of AI, things like automated runbooks probably technically qualify as AI. There may be some concern about that sort of stuff, but you're talking about replacing level-one analysts, not somebody more senior, and my opinion is that the skills shortage is still such that there's far more demand for those positions than there are people to fill them. We're trying to figure out how to tap the next generation. As a side note, ProtectWise has some really interesting research and thought leadership in that space about how to help bring in the next generation [inaudible 00:08:39]. A little bit off topic for today, but-

Chris: That's okay. We're very interested in the skills gap right now. We're talking to a lot of people about that. So that fits right in there.

Eric: Yeah. The skills gap is the biggest concern, and from my perspective, the skills gap is part of why analysts wouldn't ... The [inaudible 00:08:59] wouldn't worry about that. I'm not saying that nobody does, but I think that if that fear is out there, it's a little bit unfounded at this point. Cybersecurity AI is nowhere close to being able to do that except for very basic functions. Instead, I think what you see is AI in a position to empower those humans by taking away some of that mundane work. When you're investigating an incident or a potential incident, you're trying to figure out: how bad is this? Is this bad? What's going on here? A lot of AI is in a position to help pull together the relevant data for you rather than you having to go find it yourself.
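
To make that "pull the relevant data together" idea concrete, here is a minimal sketch in Python of automated alert enrichment: given one alert, it gathers nearby events that share a host or user, so the analyst starts triage with context instead of hunting for it. The Event fields, the enrich_alert helper and the in-memory log are hypothetical stand-ins for illustration; a real platform would query an actual event store.

```python
# Hypothetical sketch of alert enrichment: gather events related to an
# alert by time window and shared host/user. All names are invented.
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Event:
    timestamp: datetime
    host: str
    user: str
    detail: str


def enrich_alert(alert: Event, events: list[Event],
                 window: timedelta = timedelta(minutes=30)) -> list[Event]:
    """Return events near the alert in time that share its host or user."""
    return [
        e for e in events
        if abs((e.timestamp - alert.timestamp).total_seconds())
        <= window.total_seconds()
        and (e.host == alert.host or e.user == alert.user)
    ]


# Usage: the analyst opens one alert and immediately sees correlated activity.
now = datetime(2019, 1, 15, 12, 0)
log = [
    Event(now - timedelta(minutes=5), "web-01", "svc-payments", "outbound beacon"),
    Event(now - timedelta(minutes=2), "web-01", "alice", "20 failed logins"),
    Event(now - timedelta(hours=6), "db-02", "bob", "schema change"),
]
alert = Event(now, "web-01", "alice", "suspicious process spawn")
for related in enrich_alert(alert, log):
    print(related.timestamp, related.host, related.user, related.detail)
```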

Chris: It's not actually doing the processing or the interpreting or the suggesting of solutions or anything.

Eric: Yeah. I mean, that type of stuff does exist out there, but it's not trusted enough for people to really depend on it from an automation perspective. I forget the exact numbers. I have them here ... 54% of our respondents in our survey said that results are inaccurate or untrustworthy and need human review. So that's a high threshold.

Chris: Okay. So what are some of the big security risks currently plaguing AI-based security solutions? Are there any particular types of vulnerabilities or undeveloped aspects of the AI that hackers are able to take advantage of, things they might not get past a human firewall?

Eric: Yeah, absolutely. Probably some of the biggest concerns, and the things that are most difficult to control in the current environment and ecosystem, come down to an absence of controls around something called adversarial AI, or adversarial machine learning. If you are able to synthesize some data that's being considered by a machine learning algorithm, especially if it's a labeled algorithm, you can poison the results. You can cause it to come up with incorrect answers, and that incorrectness could be false negatives.

But potentially worse, depending on the motivation of the attacker, it could be false positives, where you've successfully convinced the firewall that your payment processing systems are an attacker, because no human is involved there. A human would look at that and say, "Whoa, I'm not going to firewall off that system. It's way too critical." But AI isn't going to have that subjective judgment, and it's just going to potentially cause disruption in business continuity and other things like that. The false negative side has its own issues, but it's a variation on the same theme.
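
As a concrete illustration of the label-poisoning scenario Eric describes, here is a minimal sketch, assuming NumPy and scikit-learn are available. The "traffic features," the labels and the k-nearest-neighbors model are synthetic toys, not any vendor's actual detection pipeline: the attacker floods the training set with benign-looking samples labeled "malicious," and the retrained model starts flagging ordinary traffic.

```python
# Toy demonstration of training-data label poisoning against a supervised
# classifier. All data is synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Toy telemetry: benign traffic clusters near (0, 0), malicious near (3, 3).
X_benign = rng.normal(0.0, 1.0, size=(500, 2))
X_malicious = rng.normal(3.0, 1.0, size=(500, 2))
X_clean = np.vstack([X_benign, X_malicious])
y_clean = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

clean_model = KNeighborsClassifier(n_neighbors=5).fit(X_clean, y_clean)

# The attacker injects a tight cluster of samples that mimic the victim's
# normal payment-system traffic but carry flipped "malicious" labels.
X_poison = rng.normal(0.0, 0.3, size=(200, 2))
y_poison = np.ones(200, dtype=int)

poisoned_model = KNeighborsClassifier(n_neighbors=5).fit(
    np.vstack([X_clean, X_poison]),
    np.concatenate([y_clean, y_poison]),
)

payment_traffic = np.array([[0.1, -0.2]])  # ordinary benign behavior
print("clean model flags it:   ", bool(clean_model.predict(payment_traffic)[0]))
print("poisoned model flags it:", bool(poisoned_model.predict(payment_traffic)[0]))
# With no human in the loop, that false positive could trigger an automated
# block of a business-critical system.
```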

Chris: Yeah, it sounds like the automated sprinklers going off because someone microwaved their burrito too long or something like that.

Eric: Yeah. That stuff is-

Chris: Strong solutions to non-interesting issues. So, within that framework, what are some of the limitations of AI-based security? Is there a human element involved, where if you set the AI to look for the wrong things, it's going to give you back the wrong results? Are these things that need to be considered when you're implementing?

Eric: Yeah, I mean, the false positive rate is a big deal from a limitations perspective. Just shy of half of respondents felt that it's very high maintenance, in the sense that creating rules in AI is so difficult that it may be harder than just not bothering.

There's a huge false positive problem; a majority of people felt that way. Almost two-thirds of people felt that no AI solutions on the market today offer a significant advantage for zero-days, and they're getting a lot of mixed results. It's still very hard to use, and it's hard to reason about the results. A lot of machine learning algorithms present a judgment on something, and they can't really tell you why, not in a way a human can understand. These signals happen to correlate together and indicate with some percent confidence that there's a problem here, but I can't really tell you why; I just know that when those happen together, it's an indicator of a problem. And it's often unactionable because of the tremendous scale.

Chris: Okay.

Eric: So, overall, our organization is sending more bytes to China. What can I do about that? There may not be a lot.

Chris: Right. So in the sense of the learning curve of AI, is there a possibility that some of these deficiencies will naturally fill themselves in over the course of time? If your AI system is giving back false positives, can you retrain it to have more discernment, or is this just a blind spot you're always going to have to interpret with humans?

Eric: I think we will improve that ability over time as an industry as a whole. As we get access to larger and larger data sets, we can do a better job of protecting against that. There's also a matter of setting your confidence thresholds appropriately. If you're talking about machine learning-based AI, for example, you have a slider where you can say, only alert me about this when you go above this level of confidence. The false negative problem means that every security vendor would prefer to have a false positive, because nobody's too upset when you say, "Hey, this was bad," and it wasn't, but they get real upset when you say, "Hey, this was good," and it wasn't. I think better controls will be given to analysts so they can set their own thresholds and make the decision on the accepted risk versus the alert volume they want to see. I mean, alert volume's a big concern.

The research data we have shows an unusual result, in that the companies that invest more in AI tend to be the companies with higher alert volumes initially. But then one of the number one complaints about AI is its alert volume. So it may not be addressing the fundamental problem that sent them seeking out AI in the first place.
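
Here is a minimal sketch of that "confidence slider" trade-off, using entirely synthetic score distributions invented for illustration: raising the alert threshold slashes false positives but also drops real detections, which is exactly the accepted-risk-versus-alert-volume decision Eric describes.

```python
# Toy illustration of tuning an alert confidence threshold. Scores are
# synthetic: benign events skew toward low confidence, true attacks high.
import numpy as np

rng = np.random.default_rng(7)

benign_scores = rng.beta(2, 5, size=10_000)  # 10,000 benign events
attack_scores = rng.beta(5, 2, size=50)      # 50 true attacks

for threshold in (0.5, 0.7, 0.9):
    false_alerts = int((benign_scores >= threshold).sum())
    caught = int((attack_scores >= threshold).sum())
    print(f"threshold {threshold:.1f}: {false_alerts:5d} false alerts, "
          f"{caught:2d}/50 attacks caught")
```

Each step up the slider quiets the alert queue at the cost of more missed attacks, which is why vendors who fear false negatives most tend to ship noisy defaults.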

Chris: So from a 10,000-foot view, what improvements would you like to see in AI-based security programs in the future?

Eric: There's a lot.

Chris: Okay.

Eric: I think data collection is one of the biggest things. Data is the new oil, and companies that have access to really large data sets are going to do a much better job in AI. But the visceral improvement that we want to see as analysts is the false positive rate coming down. We want to have better confidence: when this thing says it's bad, I want to have really high confidence. In AI at large, false positives aren't as much of a problem. If you ask Google Images to describe what's in a photo and it says it's a banana when it's a cat, we say, "Oh, that's funny," and move on with our lives. There are low consequences there.

But in security, the consequences are a lot higher, a lot more visceral, so our threshold for acceptable failure rates needs to be a lot less tolerant. The way we can do that is by understanding the problems better. AI, especially machine learning-based AI, benefits the most when we understand the problem space really, really well, and I would say we often don't understand the problem space as well as we should in the security industry. That makes it hard for us to use AI to address problems when a lot of our responses to those problems are a gut check, a gut feeling about things, and there's a lot of subjective analysis still being done rather than a really deep dependency on data. When you're talking about AI, without the data you're not going to be successful.

Chris: So going back to my earlier question, it doesn't sound like it, but do you see other industries utilizing AI to find solutions to these human-versus-AI questions without simply laying off a swath of the skilled workforce? I think we're all worried about Skynet and so forth, but do you think that AI is even being considered for that sort of role? Or is it really always going to be this subservient, low-level processing role?

Eric: There's a New York Times article, which I actually learned about from you, thank you for that, that talks about this, and I think behind closed doors, yeah, people are absolutely looking for machines to replace humans wherever they can get away with it. That's the dirty secret. People don't talk about it publicly, but from a corporate profitability perspective, there's always going to be a drive to improve your profits. So if there's a way to do that, even if it displaces humans, they're going to do it. I think some job functions are more susceptible to that. We see even non-AI-based solutions reducing workforce counts in things like supermarkets with automated checkouts, and ordering kiosks at a fast food joint instead of talking to a cashier. It's not a problem that's unique to AI.

It's a problem that's been around for [crosstalk 00:00:18:49]. It goes all the way back to the original industrial revolution, and we're talking now about a fourth industrial revolution, one that I think uses computers to replace people even more than ever before. That sounds really depressing, but I'm optimistic, because the people being displaced by computers and AI today, and this has been going on for a while, eventually find better options. Their overall quality of life improves. We no longer have children losing fingers in knitting machines. Life is getting better on the whole, even though it creates a temporary disruption for some people. It's unfortunate, but it's inevitable. Even if we don't want it to be the case, we're going to end up there anyway, because corporate profitability demands it.

Chris: Yeah. So in the meantime, as jobs get shuffled around and so forth, are there any particular skill sets you recommend to employees who might be fearing the rise of the machines? Whether problem solving or logic, not necessarily just tech things, how do you make yourself more desirable as an employee compared to automated functions?

Eric: It depends on what your function is and what your tolerance is for changing your career. When we talk about cybersecurity in particular, I think that AI is still very much about enabling analysts. In cybersecurity, that concern is a long way away; AI has so far to go before those problems become real problems for individuals. In other industries, if you are a cashier at a fast food joint and you're being replaced by a terminal, then your options are going to depend on what your capabilities are and other things like that. If the transition into this stuff is slow, and it has been so far, then natural solutions present themselves to people. Based on what their capabilities are, they find alternatives. But going back to security specifically, the humans are being empowered by AI rather than being replaced by it.

Chris: Do you see any security issues on the horizon for other uses of AI, not just those that automate security, but as you say, automated kiosks and so forth? What other security issues should we be watching out for with regard to that?

Eric: Actually, one of the things that I probably lose sleep over is offensive use of AI, where hackers begin to deploy AI against us and now you have machine versus machine. It's a little bit fantastical, but it's not impossible; attackers will begin using that stuff. I think we're less than five years away from starting to see major AI-driven offensive attacks. We'll probably see it first in the form of things like spear phishing, where a bot can collect enough data about a person to create a really believable phishing attack against them. There's a chance that, as data breaches and other things become more public, attackers will be able to use AI for passive vulnerability assessments.

Say, "Hey, here's the weak points in that company based on a breach that they had before. I can learn more about them." And the attackers get access to enough data, especially state-sponsored attackers have access to enough data to be able to passively identify problem organizations. But beyond the offensive use of AI, I think people are possibly aware of it, but AI is increasingly entering our personal lives. How many people have an Amazon device in their house or Google Home or the Facebook one. The AI is entering our everyday lives in a very real way. Most people have Siri or Google's assistant on their phone. It's all around us. And there's a significant amount of privacy concerns that generates as that becomes ever more deeply embedded in our lives.

Chris: So as we wrap up today, tell us a little bit about your company, ProtectWise. What are you currently working on as VP of engineering at ProtectWise?

Eric: So, ProtectWise, I'll start with that part of the question. We're a network detection and response company, and we deliver entirely out of the cloud. What we focus on is detecting new vulnerabilities as well as unknown attacks that we see in our customers' networks, and we also offer rich investigation and forensic response tools. We started with network packet capture; we store [inaudible 00:23:47] all the packets and make them searchable and immediately retrievable, and we've created a whole set of response tools on top of that. We have the ability to query network history and look for patterns, and we also go very deep on visualization. We're big believers in breaking from the norm on the visualization front. We don't like bar charts, we don't like pie charts. We like to improve the way an analyst can interact with data.

On the second half of that question, what we're working on is going deeper. We always need more data. We have a lot of data; ProtectWise has a significant data lake, and we use it all the time to improve our detections and do investigative analysis and threat hunting. But we want to go deeper. We want to get deeper into networks, we want to cover more of customers' networks, and we especially want to use those things to improve our ability to detect unknowns. That's where we're spending a lot of our engineering time these days: getting a lot better at a rigorous, scientific approach to detecting the unknowns.

We don't want to just go out there and fall foul of the marketing trap of saying, "Oh, we have behavioral analytics, we have this and that, [inaudible 00:25:11] the classic bytes-to-China function."

Chris: Right.

Eric: AI is not yet in a place where security organizations should depend on it exclusively. We need other solutions in tandem with it, and that's what we're trying to provide: the combination of the two. We want the traditional solutions and we want the AI solutions, and we want to bring them together in a meaningful way to enable people. Because as it stands today, companies can't depend on AI the way they maybe would want to.

Chris: So if people want to hear more from you about all this, Eric Stevens, do you have a Twitter or a blog, or is there anywhere else you want to send people, some website?

Eric: Oh yeah. There's protectwise.com, our corporate website. People can try out our product on there; we have a test drive you can sign up for and get a chance to play around with it for a few days, and get a sense for how we think about visualizations differently and how we think about detections, both for the knowns as well as the unknowns. I have a Twitter, though I'm not especially active on it. It's Reteric, R-E-T-E-R-I-C. But our ProtectWise website is the place to go to find out a lot more about our company.

Chris: Perfect. Eric, thank you for being here today.

Eric: Thanks very much, Chris.

Chris: All right, and thank you all for listening and watching. If you enjoyed today's video, you can find many more on our YouTube page. Just go to youtube.com and type in InfoSec Institute to check out our collection of tutorials, interviews and past webinars. If you'd rather have us in your ears during your workday, all of our videos are also available as audio podcasts. Please visit infosecinstitute.com/cyberspeak for the full list of episodes. Podcast listeners can also go to infosecinstitute.com/podcast to learn more about our current special promotions. And finally, if you'd like to try our free SecurityIQ package, which includes phishing simulators you can use to fake-phish and then educate your friends and colleagues in the ways of security awareness, please visit infosecinstitute.com/securityiq. Thanks once again to Eric Stevens, and thank you all for watching and listening. We'll speak to you next week.

Join the cybersecurity workforce

Are you a cybersecurity beginner looking to transform your career? With our new Cybersecurity Foundations Immersive Boot Camp, you can be prepared for your first cybersecurity job in as little as 26 weeks.

Weekly career advice

Learn how to break into cybersecurity, build new skills and move up the career ladder. Each week on the Cyber Work Podcast, host Chris Sienko sits down with thought leaders from Booz Allen Hamilton, CompTIA, Google, IBM, Veracode and others to discuss the latest cybersecurity workforce trends.

Q&As with industry pros

Have a question about your cybersecurity career? Join our special Cyber Work Live episodes for a Q&A with industry leaders. Get your career questions answered, connect with other industry professionals and take your career to the next level.

Level up your skills

Hack your way to success with career tips from cybersecurity experts. Get concise, actionable advice in each episode — from acing your first certification exam to building a world-class enterprise cybersecurity culture.