Cognitive biases in security decision-making

Kelly Shortridge, VP of Product Strategy at Capsule8, and Cyber Work podcast host Chris Sienko discuss how to introduce security teams early into the product development process, as well as cognitive biases in security decision-making at all levels of employment, from analysts to CISOs.

– Get your FREE cybersecurity training resources:
– View Cyber Work Podcast transcripts and additional episodes:

Chris Sienko: Hello and welcome to this week's episode of the Cyber Work with Infosec podcast. Each week, I sit down with a different industry thought leader and we discuss the latest cyber security trends, how those trends are affecting the work of infosec professionals, while offering tips for those trying to break in or move up the ladder in the cyber security industry. Kelly Shortridge is the VP of Product Strategy at Capsule8 and has been on the move for the last few months. Her presentation at this year's Black Hat was titled Controlled Chaos, the Inevitable Marriage of DevOps and Security. She also presented To Err is Human, the Complexity of Security Failure, at Hacktivity. She spends her days immersed in the mindsets and procedural successes and failures of the information security industry, and we're thrilled to have her as a guest on today's podcast to talk about some of this research. Kelly Shortridge is the Vice President of Product Strategy at Capsule8. Kelly is known for research into the application of behavioral economics to information security and has spoken at conferences internationally, including Black Hat USA, Velocity, AusCERT, Hacktivity, Troopers, and ZeroNights. Previously, Kelly served in product roles at SecurityScorecard and BAE Systems after co-founding IperLane, a security startup that was acquired. Kelly began her career as an investment banking analyst at Teneo, covering the data security and analytics sectors. Kelly, thank you for being here today, and I apologize if I flubbed the pronunciation of any of those companies.

Kelly Shortridge: It was perfect. Thank you so much for having me, Chris.

Chris: Okay. Well, thank you so much for being here. To start at the very beginning, we'll ask you what we ask everyone. How did you first get started in computers and security? So, we mentioned that you started with a finance background. What drew you over to the tech and security sector and when did that happen?

Kelly: Yeah, it's a good question, when did I come over to the dark side? So, I did start my career as an investment banking analyst, specifically working on M&A as well as capital raising assignments, and only a few months into my role at Teneo, I was tasked with getting smart on the infosec industry, since there was tons of deal activity growing. This was back in the 2012 time frame, and it really only bloomed over the next few years. So, I covered information security from the investment banking lens for about two and a half years before deciding to YOLO and found a security startup.

Chris: Wow, okay. So, they basically sowed the seeds of you leaving the company by asking you to do something that they thought was maybe a temporary thing. Look at the security thing, and then you're like, "Okay, I'm out of here."

Kelly: Exactly, yes. My cognitive spark really lit up when I would study information security stuff, and it was, in some ways, a natural fit given what I had studied in undergrad, and also I had always kinda had a predilection for tech anyway.

Chris: Okay, what did you study in undergrad that connected you to it?

Kelly: Yeah, so I actually come from a liberal arts background and I studied economics, particularly anything regarding behavioral economics. I had been into behavioral economics since I was, I don't know, 10 or 11 years old.

Chris: Can you unpack what that is for our listeners?

Kelly: Sure. So, traditional economics assumes rational actors and builds all of this theory, whereas behavioral economics does have some theory, but it's generally grounded in experimentation. So, looking at how people actually behave, not just how we think they're going to behave assuming certain principles of rationality.

Chris: Oh, okay. So, I can definitely see where that would prepare you well for people acting possibly irrationally in the security sector and online.

Kelly: Exactly.

Chris: So, what are some of the highlights of your security career? What are some of the jumps that you made in terms of experiences or knowledge or research or job changes that got you from here to there? You said you started a startup pretty quickly there. So, where did you get the wherewithal for all that along the way?

Kelly: That's a good question. As I mentioned, coming from a liberal arts background, it was a pretty massive knowledge leap to go from finance and economics over to information security, 'cause really I was only technical in the sense that I knew how to build websites and I knew how to build gaming computers, but I didn't really understand systems or infrastructure or anything like that. So, before I even switched away from finance, like I said, it was pretty obvious to me that there were tons of cognitive biases in Infosec, and they were manifesting in ways that made the market pretty inefficient, which meant that the problem area was really interesting to me, and it really led to the question in my mind of, "Okay, how do we do all of this better? Because clearly, there's probably a more efficient way for us to be architecting security strategy, and certainly the solutions that we have in place." So, my early research was really around those cognitive biases and behavioral economics in information security. Next was the, "Okay, how can we do it better?" which led to some of my research on resilience; and during that research, what I realized was that resilience as a topic was gaining a lot more traction on the... well, at the time it wasn't called SRE, but the DevOps, infrastructure operations side, and they were already making strides, and Infosec hadn't really been paying attention. So, that ultimately led me to then look at the unification of DevOps and Infosec, and during that period of time, I think what I keep coming back to is that Infosec's fundamental problem is this misalignment of incentives, and I think a huge part of that is that the people who are building systems aren't the ones responsible for securing them, which just creates all sorts of moral hazard issues.
Because the DevOps movement is about aligning responsibility and accountability, it kinda felt like, "Okay, maybe we should start considering some of these principles in information security as well," and that's where a lot of my research is today.

Chris: Okay. I think what's worth noting or stressing for our listeners here is that you didn't just study security to study security, or 'cause you wanted to do security. You started from certain assumptions and certain questions, and you learned based on, "I need the answer to these specific questions," and learned that way. So, I think that's an interesting strategy that I don't think we hear very often on the show.

Kelly: I personally wish more people did that, really going back to what is the actual problem space? What are the assumptions that maybe we're taking for granted that have always been there? And, maybe we should course correct a little bit. I wish people were a little more critical about the status quo in the industry.

Chris: Yeah, and also the fact that you were already in security and then also studying to answer these certain questions about security. I think there's another, not fallacy, but issue that early students have, which is that they figure, "Once I've learned all the stuff, then I can make my entrance into the security world," whereas it might be good to start in there in some capacity, any capacity, and then learn, and as Ray Bradbury said, build your wings on the way down.

Kelly: Definitely. I agree with that.

Chris: So, what do you do as VP of Product Strategy for Capsule8? What does your average day look like in terms of projects, hours, expectations from clients?

Kelly: Yeah, it's definitely a nebulous title, but really my goal at Capsule8 is to ensure that our product provides the best visibility into Linux production systems, basically making sure that the security teams are happy 'cause they have the detection coverage they want, and that ops teams are also happy because we aren't interfering with their production performance. So to do so, I really have to deeply understand the needs of our communities. Obviously, I've been studying the security community for a while, and I think having that heartbeat of the market informs where we need to enhance the product, what opportunities we should pursue, and how we should ultimately communicate the value to our customers. So, as might be obvious from that description, there's really no average day for me, other than I'll probably have bacon at some point and then some tea with collagen in it.

Chris: You heard it here first. Bacon runs the security industry.

Kelly: It's true, yeah. So, some weeks I'm prepping for keynotes, like I was before Velocity and Hacktivity last month, prepping for podcasts, webinars, but some weeks my labor focuses on product roadmap and strategy, where I'm writing blog posts, requirements documents, or helping out with some sort of prioritization decision.

Chris: Okay. Yeah. So, do you feel you're on the clock all day long? Do you keep regular hours? Do you have things that are keeping you up at two a.m. that you have to deal with, and things like that?

Kelly: No, I'm almost always sleeping very soundly at two a.m.

Chris: Okay.

Kelly: I at least get eight hours a night, sometimes it's nine. I think sleep is important, and there's just so much research showing that people make really terrible decisions when they're sleep deprived, and probably if you're helping with product strategy, you should make good decisions. So, I definitely think it's important to try to maintain some sort of separation between work and home life. Most of my home life is more like personal work, whether it's research or posts on my personal blog. So, I'm definitely not gonna argue that I'm perfect at separating out the two.

Chris: Sure.

Kelly: But, I would say that I've definitely learned the hard way that if you are thinking about it 24/7 and you make it too much of a core component of your fulfillment in life, you're probably going to err on the side of taking too many risks, or being way too nervous about everything you do, and again, that diminishes decision making. So, it's almost like tricking my brain into being like, "Okay, you have to have a healthy balance here, otherwise you're not actually gonna make decisions that are gonna fulfill you in the long run."

Chris: Yeah. We've had such a spectrum of guests on here. Some are very compartmentalized, and others, yeah, "this thing's bothering me at two a.m., this thing's at four, I'm up at five." Wow. If you can do that, I guess, but I can't do that either.

Kelly: I just don't think it's really sustainable.

Chris: No. I know. I know. Yeah. It's a different lifestyle for sure.

Kelly: Definitely.

Chris: So, one of the topics I wanted to ask you today about is your presentation at this year's Black Hat convention, which was titled Controlled Chaos, the Inevitable Marriage of DevOps and Security. So first of all, can you tell me something about the main points of the presentation?

Kelly: Definitely. Yeah, there are a few main points, but I think it goes back a little bit to what I said earlier, and one of the core points is that we're gonna keep having conflict and inefficiency if we don't unify responsibility and accountability. So for example, in DevOps, it used to be that the developers would just throw all the performance issues over the wall to operations. Operations would be super frustrated, because they didn't build the systems, so why did they have to clean it up? And, there was really little incentive for developers to actually build performant systems. So, developers were responsible, but they weren't held accountable. So I think, again, what's powerful about DevOps is this shared responsibility by design, which is important, and my argument in the talk is that security similarly needs to undergo a shift. I don't think you can keep having accountability and responsibility be siloed and reside among separate teams and organizations, 'cause it's going to lead to this conflict and inefficiency. So, security teams, I think most people don't wanna be stuck fixing issues in systems they didn't build, right? And, developers are actually missing the feedback they need in order to build more secure systems; they don't see all of those security issues, so they don't realize what headaches they're causing all the time. So, to close that feedback loop and realign incentives, you really have to unify that and start having the developers be held accountable, and one of the models that I at least am working on right now and fleshing out a little further is an embedded SME, subject matter expert, model among security teams. But for this talk at Black Hat, another key point was the DIE model, which was pioneered by Sounil Yu, basically allowing for a way to align DevOps priorities to Infosec, and a core point of the talk, really, was that DevOps goals aren't that dissimilar from security's; we're just not thinking about it in the right way.
So, DIE stands for Distributed, Immutable, and Ephemeral, and that corresponds to different qualities of infrastructure that actually have a lot of overlooked security benefits by design. So for example, an immutable system is one where you have a standardized image that's used to build things. You don't actually manually configure or edit the resource in any way. So, what it means is that you can create a rule simply disallowing SSH access to systems, which obviously makes it way harder for attackers to conduct their operations, which is great from a security perspective, but a lot of security people overlook that when they hear about immutable infrastructure. So, part of the talk was borne of the fact that I hear tons of security people panicking about microservices and other modern infrastructure, when really, a much better use of their time, and what we tried to encourage in the talk, is exploring how they can leverage this new infrastructure to enhance their security posture.
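The kind of rule described here, where any interactive SSH session on an immutable host is inherently suspicious, can be sketched in a few lines. The event format and host inventory below are invented for illustration; this is not Capsule8's actual API.

```python
# Hypothetical detection rule for immutable infrastructure: since all
# legitimate changes arrive via redeployed images, any SSH daemon
# activity on an immutable host is a signal worth alerting on.
IMMUTABLE_HOSTS = {"web-1", "web-2", "api-1"}  # assumed inventory

def flag_ssh_events(events):
    """Return alerts for SSH sessions observed on immutable hosts.

    `events` is a list of dicts like {"host": ..., "process": ...}.
    """
    return [
        f"ALERT: interactive SSH on immutable host {e['host']}"
        for e in events
        if e["host"] in IMMUTABLE_HOSTS and e["process"] == "sshd"
    ]

events = [
    {"host": "web-1", "process": "nginx"},      # normal workload
    {"host": "api-1", "process": "sshd"},       # should never happen
    {"host": "build-box", "process": "sshd"},   # mutable host, ignored
]
print(flag_ssh_events(events))
# → ['ALERT: interactive SSH on immutable host api-1']
```

The point is less the code than the inversion: on mutable systems SSH is routine noise, while immutability turns the same event into a near-zero-false-positive detection.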

Chris: Okay. That's awesome. How would day to day operations change? Could we walk through a theoretical project of how's it done now and what are some of the steps that would change in this other model? At what point does it go away from the Dev team and go into this security SME model, and then kick back to them, and so forth?

Kelly: So, admittedly I'm still fleshing this out, hopefully in book form that will come out next year. Knock on wood. But the idea is basically, so the status quo right now is often that developers build a project, maybe they'll invite security to some sort of high-level review before they actually build the thing, but most of the time, they build it, they ask for a security review before it gets deployed, security finds all these vulnerabilities and prioritizes them for fixing. Sometimes it's gonna be fixed by the product team itself, sometimes it's just the security team. Oftentimes, they'll argue over what's important or not, particularly if it's one week until release and they don't want to hold it back. And basically, what we were proposing instead is that you have a security subject matter expert from the beginning, sitting at the very conception of the project. They're sitting in on design, poking holes in the architecture, figuring out, for example, the instance I gave of immutable infrastructure, or even ephemeral systems. If you design the application to be stateless, you're actually removing quite a few vulnerability classes. Are you looking at things in that light? Are you thinking about attacker math, what's the easiest path for the attacker to get to their goal? All these little things that can really materially improve the inherent security that is created by the design. And then having security sit in as prioritization decisions come up while it's being built, then obviously outlining, similar to the unit testing that you see needed before deploying anyway, what security tests need to be in place, so that way, by the end, security's not actually the last-minute gatekeeper. It's been--

Chris: Yeah, they're not being surprised by what they're seeing.

Kelly: Exactly. Yeah. So, it's really more of a partnership model, and it definitely involves rethinking the typical security organization and security team structure, but my personal take is that it's actually gonna reduce a lot of headaches for security people. It does mean they need to get smarter on the systems side and better understand the products they're protecting, but it means, probably, they're not gonna be the ones fixing a bunch of stuff. They can sit back and just provide expert guidance.
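The pre-deploy security tests mentioned above, run alongside unit tests rather than as a last-minute gate, might look something like this in CI. The config keys checked here are hypothetical examples chosen for illustration, not any particular standard.

```python
# Sketch of pre-deploy security checks wired into the pipeline the same
# way unit tests are: the build fails early, not at a final gate review.

def security_checks(config):
    """Return a list of failures; an empty list means the build may ship."""
    failures = []
    if config.get("debug", False):
        failures.append("debug mode must be off in production")
    if config.get("admin_password") in {"admin", "password", ""}:
        failures.append("default admin credentials detected")
    if not config.get("tls", False):
        failures.append("TLS must be enabled")
    return failures

good = {"debug": False, "admin_password": "s3cr3t-rotated", "tls": True}
bad = {"debug": True, "admin_password": "admin", "tls": False}

assert security_checks(good) == []       # passes, deploy proceeds
assert len(security_checks(bad)) == 3    # fails fast, long before release week
print("security checks run like unit tests")
```

Because the checks live next to the code, developers see the security feedback themselves instead of having it thrown back over the wall a week before release.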

Chris: Yeah, you're doing your work upfront and then you don't have sleepless nights at the very end there when we're trying to put the fires out.

Kelly: Precisely.

Chris: Okay. So, this was a phrase that I hooked onto in one of your presentations. Can you tell me more about the phrase, "chaos and resilience is Infosec's future"? What does that mean for cyber security professionals trying to break into the industry now, or in terms of setting their own career strategies?

Kelly: Yeah, so it's admittedly a bit of a provocative phrase. That wasn't my intention. So, I think in the future, like I was just saying, Infosec professionals really have to understand how the systems, products, and services that they're actually securing work. They have to have programming or some sort of development experience, which definitely isn't true from everything I've seen today. So, I think it's much easier to find bugs in systems than to build secure systems from the ground up. It's something that a lot of the top vulnerability researchers that I know tend to agree with. So, even though I'm not one myself, I feel comfortable stating that. So again, being able to serve as a subject matter expert to a product team, helping them determine where security should be embedded from the design phase through to the actual delivery phase to end users, I think that's gonna be invaluable. So, that kind of blend of both programming and enough security expertise I think will be critical. And resilience in particular, why I emphasize that, is I think we need to start thinking of security as something that a system does, rather than something that a system has. That's something I brought up in my Hacktivity talk. I think resilience is really about that. You can't just reach an end state of resilience. You can't just check the box on resilience. It's this ongoing ability for systems to be robust, but also adapt and transform as the context of the systems around them changes. So, that's why I think resilience is really important, because it's not just about security. It includes security, but it's also about performance. And, the chaos side is really about, and this is true for resilience as well, embracing that failure is inevitable. It's not only an option, but it's very likely to happen, and I think this philosophy really changes how you think through problems.
Instead of attempting to establish perfect prevention, like we can stop all the threats, or creating this bible of policies trying to regulate all possible user behavior, I think instead you're moving towards, "Okay, how can we continually test our readiness? How can we ensure that we're responding appropriately?" Maybe there's an outage for a few seconds, but we're recovering very gracefully, 'cause that's ultimately what matters.

Chris: Okay. In terms of people who are learning the trade and so forth, is this something where we can look at studying different things, learning more problem solving, or putting your focus on certain other aspects of the art of security, or whatever like that? Is there something we can take from this notion of chaos and resilience and apply it to people who are just getting in on the ground floor?

Kelly: Definitely. I think a big part of it is obviously paying attention to tools like Chaos Monkey by Netflix, and there are a few others. Looking at chaos engineering as a discipline I think will be invaluable. I think also, a lot of it is really about mindset. It's about thinking about the paths to failure that you do know, and how you can start injecting that failure in order to test it, which is where the programming expertise comes in, which isn't always something required among people studying security. So, I think honestly, also looking to other disciplines. For example, resilience matters a lot when you're building buildings. How can it withstand an earthquake? Buildings are meant to shake a little bit, right?

Chris: Right.

Kelly: So, not confining yourself solely to tech I think is also good because there's a lot we can learn from other disciplines, too.
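A first chaos experiment along the lines discussed can be tiny: deliberately inject a known failure into a dependency and assert that the system degrades gracefully instead of crashing. This sketch invents the service and failure mode purely for illustration.

```python
import random

def flaky_lookup(key, fail_rate=0.5, rng=random):
    # Simulated dependency that we deliberately make unreliable, the way
    # a chaos tool kills instances or drops packets in production.
    if rng.random() < fail_rate:
        raise TimeoutError("injected failure")
    return f"value-for-{key}"

def resilient_lookup(key, rng=random):
    """Degrade gracefully: retry once, then fall back to a cached default."""
    for _ in range(2):
        try:
            return flaky_lookup(key, rng=rng)
        except TimeoutError:
            continue
    return "cached-default"  # brief degradation, graceful recovery

# Chaos experiment: force 100% failure and verify we still get an answer.
class AlwaysFail:
    def random(self):
        return 0.0  # always below fail_rate, so the dependency always fails

assert resilient_lookup("user-42", rng=AlwaysFail()) == "cached-default"
print("survived total dependency failure")
```

The experiment encodes the mindset shift: the test's success criterion isn't "no failures occurred" but "we recovered gracefully when they did."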

Chris: That's awesome. So, another interesting topic that I've seen you speak on, and you mentioned it briefly and I wanna get more into it, is the concept of cognitive biases behind security decision making. So, what are some of the logical errors that people make in security strategy, and what are you prescribing as an alternative?

Kelly: Yeah. So, I'd say... gosh, I could talk about this probably for the whole hour to go through them all. One I've been trying to highlight a lot recently, 'cause I've definitely gotten pushback on things like the Black Hat talk where people are kind of set in their ways, is status quo bias. It's basically: the way that we've always done it is the better way, just 'cause it's the way we've always done it.

Chris: Yeah, it worked fine. Yeah.

Kelly: Right. Sometimes there is evidence for that, but a lot of times, there's not.

Chris: Yeah.

Kelly: So what's interesting is, empirically speaking, that deviating from the status quo can actually feel like some sort of loss to people. It feels like they're losing something, and I think that's very true for security from what I've seen. If you created some sort of policy or implemented a tool, it can almost feel like a failure when you move away from it. Maybe my strategy wasn't good enough or something, and it definitely creates a lot of fear, even if it's the best course of action. So, I think status quo bias definitely pervades a lot. Generally speaking, I think humans are bad at evaluating options neutrally and quote unquote rationally. So, we're more likely to gravitate towards what we already know, because that helps us be a little more comfortable, and in general, there's the concept of choice fatigue, where we have too many choices. It's kinda like a really extensive menu you sometimes see at diners, where it's just like, "Oh, my god. I have no idea what I can possibly get."

Chris: I don't want anything.

Kelly: Yeah, yeah, so it's just easier to go with what we already know. I think there are a few ways to counter it. One, I think talking about the value proposition is really important, but removing labels. So, one could present a shift to microservices or APIs as having the perks of easier diagnostics, removing numerous vulnerability classes, sometimes removing the ability for attackers to use automated attack tools, tangible best practices, and then things like standardization benefits. Then, monoliths in contrast maybe have a pro of, okay, it's a single resource, maybe it's easier to track, there's more direct control that the security team can have. But my guess is, without labels attached, most security teams will be like, "Huh, this first one sounds a lot more interesting," even though if they hear the words microservices and API, they panic. It's like, "Oh, my god, I don't really know what that means." So, I think it's important to look at the pros and cons and try to remove the words that can cause a little trepidation among the team. But, one thing that I've seen in some behavioral economics research is the idea of the reversal test, as well as the double reversal test.

Chris:  Okay.

Kelly: Which is basically the idea that you take... it's almost like taking it to its logical extreme. So, let's say the security team currently disallows containers or something like that, and then someone proposes, "Hey, developers should maybe use containers because it allows for greater standardization, you can do things like killing resources, and it has the benefits of being immutable infrastructure," like I outlined earlier. So, it's just easier to catch bugs and implement changes and respond to incidents more quickly. But if the security team pushes back, and this is definitely something I've heard them do, the question to ask is then: would less standardization be better? The proposal is pro-standardization, so you ask the reverse of that. It's like, "Okay, if you're against more standardization, then clearly you think that we should have less standardization," but they're probably gonna argue that less standardization isn't better. So then, you really have to needle them on, "Okay, then why are we settling in this sweet spot of standardization? Something probably isn't right." And, the double reversal test is just basically taking this even further, which can be kinda fun, though obviously you can sound a bit like a jerk to a certain extent. You basically set up the scenario like change is inevitable and you just have to go forward, and the idea is: would you intervene to go back to how things are today? So for example, say PCI suddenly requires, for compliance, that you move everything to a public cloud. Obviously, that's gonna be a headache for a lot of organizations, but let's say they do it over a few years, and security teams figure it out. But then, a few years later, PCI removes that requirement. So then, the question is: should the organization migrate all of their stuff from the public cloud back to where it is today? My guess is most of the time the security team will be like, "No, that doesn't make sense."
In which case it's probably status quo bias if they were resisting the move in the first place.

Chris: Interesting. Okay. So, are there ways that these kinds of biases might affect strategy at different job levels, from security analyst up to CISO? Obviously, we're talking about management level things, but do people monitoring vulnerabilities and things like that, are they falling into the same traps? I suppose with the status quo thing of, this has always worked, I just gotta go in and do my job, and stuff like that.

Kelly: Yeah, you definitely see it at different levels. So, one thing I've seen with CISOs is it's not necessarily a status quo local to the organization, but they have notions of what's worked for them at other organizations, so obviously, that's exactly what should be pursued at the new organization, because they get to set the strategy. More at the security engineer level, I think it's more, "Here's how I do my practices. Here's how I think best practices work. I think we should use this vuln scanner." A lot of times it's more the tools they're used to using, just 'cause they deal a lot less with the strategy level and it's more at the tactical level. So, I'd argue every human on earth succumbs to status quo bias, unless they make a conscious effort to move away from it. So, I think yeah, there are tons of different ways it manifests within security organizations.

Chris: Are there ways of evangelizing the breaking out of status quo bias if you're at a lower level position and you see your bosses being fine with, "Well, it worked last year, it will work again this year," kind of thing?

Kelly: That's a good question. I think it's gonna depend on the organization, and that comes down to culture, 'cause sometimes it's pretty frowned upon for more junior people to talk back to managerial level people. But I think the reversal test, like I mentioned, if you are able to be in brainstorming and strategy meetings, those are worthwhile ones to bring it up in and see how the group discusses it, and then maybe you can gently point out, "It seems something's not right here," but it's gonna be hard if you're not the one setting strategy. Put it this way: people tend not to like being reminded that they're succumbing to biases and that maybe they're thinking a little irrationally. So, it's definitely culturally tough to navigate.

Chris: Okay. So, I wanna jump sideways here. I think we're probably gonna be talking on some similar level, but in your presentation at Hacktivity, which was titled To Err is Human, the Complexity of Security Failure, you talked about the concept of error, as well as hindsight and outcome biases constraining our thinking about security. Is this related, in the sense of we're trying to get away from the "humans are the biggest liability" mindset that's sort of common in the industry, in favor of, errors are gonna happen, so what do we do next? So, I know there are a lot of prescriptions and suggestions in the presentation, but can you give me an overview of some of the insights and recommendations?

Kelly: Yeah, definitely. I have a ton of recommendations, and I obviously recommend people check out the talk, and hopefully the video will be available soon, but I'll definitely try to summarize briefly. So, I think there are three key things, and it's how I roughly structured the talk, that you should do to get out of the "humans bad" sort of mindset. So, the first is thinking from a systems perspective. Like I mentioned, thinking about security as something a system does, not something a system has. So, it means that you have to start anticipating failure and recognizing that it's gonna stem from, really, the interrelation of components. No system is just a linear set of components that trigger each other one after another. It's this spaghetti-ish mess, and it includes things like economic incentives and cultural norms in the organization, and again, really not just the underlying components themselves. You have to start thinking more along the lines of, "Okay, complex systems are these continually moving parts." We can't just blame one thing and call it a day. The second thing is prioritizing Security UX, so user experience. I think you really have to start putting yourself in the shoes of your colleagues and starting to empathize with their workflows. You have to consider the goals and constraints they have, because their job is not just talking about security 100% of the time. So, the example I give in the Hacktivity talk is an accountant in the business email compromise scenario we've heard a lot about. You have to recognize that they probably have some sort of key performance indicator around minimizing transaction time, or something that makes them have to hurry through their job, and so having some empathy with that can inform how you start designing your policies a bit better. I think in general, you always need to be thinking about this when you design policies, procedures, tests, all of it.
You have to ensure that the easy way is the secure way, because otherwise I don't think humans will do it. I can't really think of many organizations where they're like, "Yeah, don't worry, just do this whenever, no pressure." We work under time constraints and have some sort of outcomes we have to hit, so you have to be keeping this in mind and not just imposing your rules as if security's the number one priority for everyone. And then, the final thing is a blameless culture. So, that doesn't mean absolving people of responsibility for their actions. That would be ridiculous. But it's really just making sure you're finding a balance between accountability and safety. So, there's one concept I think is really important, which is asking neutral practitioner questions. So, the idea here is that you take someone who's also an expert, let's say you take another accountant, and you give them the context of the situation, like, we had a client who was requesting $100,000 to be sent to this bank account overseas, and you had a backlog of 100 transactions to handle, blah, blah, blah, just going through the context, and asking them what they would have done in that situation. That's a lot more likely to get to the truth than us looking back and being like, "Well, clearly that accountant's stupid for not recognizing that scam."

Chris: Right, or telling him, "why did you do this?" Yeah.

Kelly: Exactly. So, it really helps uncover what those competing priorities and goals were, and that's, to me, the only way you can really inform security strategy: by understanding what those constraints are, so you can then counter them and help encourage safer behavior. So, yeah. It's the systems perspective, Security UX, and blameless culture; really, all of those in tandem help improve your resiliency a lot.

Chris: Yeah. Do you have any thoughts on, there was that story down in Florida, where a college or something got hit by ransomware and the guy at the helm was let go. Do you feel like that was an appropriate response, or does that fall into this where he did everything he could and they were just mad that the inevitable happened?

Kelly: Yeah, there are definitely some mediocre people. I can't speak to that guy in particular, but in general, firing the human is a very politically expedient thing to do, but it sounds to me like--

Chris: Let your board think that you're doing something.

Kelly: Exactly. But, are the conditions that led to it in the first place still there? Probably, yes.

Chris: Yeah. So, I wanna jump back to a phrase that really jumped out at me. You said, "making it so that the easiest way is also the safest way." Do you have any insights on that? That's such a clarifying concept. I really like that. It's one thing to add yet another layer of things that you have to put into your report or whatever, but to make it so that the thing you're inevitably gonna do anyway is also the thing that keeps social engineering or phishing or other crazy things from getting through. Can you think of any case studies of how you would implement this in a real-world kind of way?

Kelly: Yes. I gave some during the talk. The concept here is choice architecture, which is pretty well studied. So, as an example outside of security, the classic one is making 401(k) contributions the default. Very few people actually opt out, which means that people are saving more and obviously not losing out on essentially free money. So, the use of defaults can be very powerful. For example, there are some companies now that require you to enable two-factor authentication before you can set up an account or get access to it. You are adding a little bit of friction that way, but you're essentially inserting it into the account setup workflow, which generally feels less intrusive than going about it after the fact. Things like single sign-on are a great example of a security tool that made the secure way the easy way. Most people are pretty terrible at managing all their accounts, so if you have one portal through which all of your colleagues and users can go, and you have two-factor on it, it's more convenient for the user because they have all of their apps in one place, and it's more secure for you because you can then manage all of the apps and credentials in one place. So, I think that's a great recent example of how both users and the security team can actually benefit if you think about Security UX a little bit.
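The "secure default" idea Kelly describes, folding two-factor enrollment into the account-setup workflow itself, can be sketched in a few lines. This is a hedged illustration with hypothetical names (`Account`, `create_account`), not any particular company's implementation:

```python
# Sketch of choice architecture via defaults: activation is gated on 2FA
# enrollment, so the easy path through setup is also the secure path.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    email: str
    totp_enrolled: bool = False
    active: bool = False

def create_account(email: str, totp_secret: Optional[str]) -> Account:
    """Create an account; two-factor enrollment is required, not opt-in."""
    account = Account(email=email)
    if totp_secret is None:
        # No opt-out path: the user cannot finish setup without a second factor.
        raise ValueError("two-factor enrollment is required during account setup")
    account.totp_enrolled = True
    account.active = True
    return account
```

The design choice mirrors the 401(k) example: rather than nagging users to enable 2FA after the fact, the workflow simply never produces an active account without it.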

Chris: Okay. Yeah. I think once you get that core concept into your head, there's probably all sorts of other places you can apply it to your organization.

Kelly: Definitely. The problem, though, is that overwhelmingly, security has not considered UX. I can't think of many talks at security conferences that have brought up the concept of UX at all.

Chris: All right

Kelly: I think it's just underexplored in our domain.

Chris: All right, get on it, everybody.

Kelly: Yeah.

Chris: So, as we wrap up today, what are some of your predictions for the big security and DevOps concerns that'll emerge in 2020 and beyond? Do you think that these kinds of biases will be better understood and overcome in the future?

Kelly: I sure hope so, but these biases are certainly still plenty rampant in other areas. Even in the behavioral finance research I've seen, for example, traders tend to perform worse on rainy days. Humans just have these weird brains. What can you do about it? But, I definitely hope we see more behavioral economics, maybe even neuroeconomics, and given the element of fear in security, I imagine there's some sort of stuff going on in the amygdala or something that we could harness, but I think we definitely aren't there yet. In general, though, it's good to better understand how we make our decisions in order to inform how we can make better decisions, right? The primary challenge, and this assumes that people accept my proposal that security shouldn't be in a silo, which is surprisingly controversial, and that it becomes more embedded in DevOps, is really about culture and process. I think there will be some resistance among dev and ops. They just naturally don't wanna accept accountability for security stuff. They don't really understand how security works. They've probably had really bad experiences with their security teams in the past.

Chris:  Right.

Kelly: Even if not at their current company. Now, on the infosec side, I can definitely tell you there's resistance about no longer being the gatekeeper and no longer being the sole arbiter of control and of allowing releases and such. So, I think there's definitely that cultural friction in place, but it was the same thing with dev and ops, and it worked out. So, I would definitely hope for that. I still maintain the idea of an embedded SME, and there's a similar movement happening for database administrators, so we're not the only ones. The idea of an embedded SME on a product or feature team, I think, makes the most sense for where the security organization goes today, but I do think it's a radical shift, so I'm not gonna pretend it will be adopted overnight. I know some companies that are starting to move toward that, and I'm hoping to see more evolution along those lines, but I also think figuring out how to establish communication is gonna be a tricky and definitely non-trivial endeavor. I don't think security's a very good communicator, in general. I think we over-complicate things a lot, and I've had pretty great luck in explaining security concepts to dev and ops people, and they tend to get it. It's just that we have to slightly change our language and talk more in terms of risks that they're gonna understand, like jeopardizing uptime, things like that. So, my concern is really that people will overly focus on technology, like DevSecOps solutions, rather than putting in the hard work on culture and process, and really stick to those buzzwords while trying to transform. But, until you really establish this unified collaborative culture with a strong communication flow and process integration, for example, integrating security tests along with your unit tests, I really don't think you're gonna see a lot of progress. Tools won't just solve it; they'll just create even more silos.
My hunch is actually that you'll streamline the number of tools you need, 'cause for example, if you need security telemetry to monitor database access or something, your colleagues on the engineering side are probably already collecting that telemetry 'cause they need it for performance monitoring. So, I think if you start working together more, you can actually reduce the amount of spend and reduce the number of tools you have, just because you're starting to leverage the same telemetry and analysis and stuff like that. And, I also think that if we then have DevOps more accountable for the security issues they introduce, they're not gonna tolerate the horrible UX of most security tools. They're used to a much better user experience, so I think most security companies are gonna have to really step up their game if they wanna still compete in the same paradigm. That's obviously part of why I joined Capsule8, 'cause I think we can do that, but I think it's gonna be a challenge for most security tools looking forward.
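Kelly's example of integrating security tests alongside unit tests can be sketched concretely. This is a hedged illustration with hypothetical function and header names, showing a security expectation expressed in the same `unittest` suite a dev team already runs, rather than in a separate security silo:

```python
import unittest

def build_response_headers() -> dict:
    """Hypothetical application code under test."""
    return {
        "Content-Type": "text/html",
        "X-Content-Type-Options": "nosniff",
        "Strict-Transport-Security": "max-age=31536000",
    }

class TestResponseHeaders(unittest.TestCase):
    # The ordinary functional test the dev team would write anyway.
    def test_content_type(self):
        self.assertEqual(build_response_headers()["Content-Type"], "text/html")

    # Security expectations live in the same framework and run in the same
    # CI pipeline, so a missing header fails the build like any other bug.
    def test_security_headers(self):
        headers = build_response_headers()
        self.assertEqual(headers.get("X-Content-Type-Options"), "nosniff")
        self.assertIn("max-age", headers.get("Strict-Transport-Security", ""))
```

Because the security checks share the dev team's test runner and telemetry, there is no extra tool to buy or separate report to read, which is the streamlining Kelly describes.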

Chris: Okay. Well, let's wrap up. On that point, let's talk a little bit about what Capsule8 does differently and how they've thrown their hat in the ring.

Kelly: Yeah, definitely. Like I said, Capsule8 is designed to be ops-friendly and not risk production performance, while still providing the visibility and protection people need. Whether you're on containers or virtual machines, in a data center or public cloud, whatever flavor of Linux systems you have, we'll protect it, and we do so without clogging networks, using a lot of resources, or requiring a kernel module, which is like death now for any production system. We definitely have a lot of former black hats in the house, which means we're pretty good at catching black hats and providing multiple vantage points into the attack lifecycle. But again, we're constantly thinking about, "okay, what does the ops team need, and how can we help the security and ops teams work together on a unified strategy to keep production healthy?"

Chris: Okay. First of all, I'm imagining this script, the movie script where you are finding these Black Hats in flagrante and saying, "all right, join our team."

Kelly: Yeah, exactly.

Chris: So, if people wanna know more about Kelly Shortridge or Capsule8, where can they go online?

Kelly: Yeah, so my personal site is S-W-A-G-I-T-D-A dot com. That has all of my written and speaking content. Then there's the Capsule8 site. I definitely recommend checking out the work of our labs team, which has a bunch of deep-dive technical content on things like Linux exploitation, which is pretty fun.

Chris: Awesome. Kelly, thank you so much for taking the time to join us today.

Kelly: Thanks so much for having me.

Chris: Okay, and thank you all for listening and watching. If you enjoyed today's video, you can find many more on our YouTube page. Just go to YouTube and type in Cyber Work with Infosec. Check out our collection of tutorials, interviews, and past webinars. If you'd rather have us in your ears during your workday, all of our videos are also available as audio podcasts. Just search Cyber Work with Infosec in your favorite podcast catcher of choice. To see the current promotional offers available to listeners of this podcast, visit our website. And as we mentioned before, election security is a big focus for us going into 2020, so use our free election security resources to educate coworkers and volunteers on the cybersecurity threats they might face during the election season. For more information about how to download your training packet, click the link in the description. Thank you once again to Kelly Shortridge, and thank you all for watching and listening. We'll speak to you next week.
