Understanding developer behavior can augment DevSecOps

Today on Cyber Work, Nir Valtman, CEO and co-founder of Arnica, discusses developer behavior-based security. In short, there are lots of ways that backdoors or vulnerabilities can make their way into developer code. One door we can close on these intrusions is implementing processes that detect behavior anomalies in developers. Think of your bank monitoring for unusual purchases, calling you to ask whether you really just spent $300 on a bobblehead from The Last of Us that’s shipping from Brazil. If you did, not judging, full speed ahead. If not, then we’ve got a problem on our hands. Valtman explains the benefits and the limitations of behavior-based security measures, as well as tips for developers-in-training.

0:00 - Developer behavior-based security

2:56 - Nir Valtman’s start in cybersecurity

4:40 - Moving into the developer world

8:20 - Working as a cybersecurity CEO

10:33 - A typical day for a cybersecurity CEO

19:30 - Monitoring product features

20:15 - DevSecOps behavior-based security

27:42 - Flagging irregular online purchases

30:35 - Impact of pre-fab code on behavior anomaly detection

33:28 - GitHub impact on developer behavior and security

38:09 - Ensuring you don’t skimp on sec in DevSecOps

42:35 - What should future developers know?

44:56 - Skills and experiences for budding developers

51:09 - What is Arnica?

54:57 - Outro

– Get your FREE cybersecurity training resources: https://www.infosecinstitute.com/free

– View Cyber Work Podcast transcripts and additional episodes: https://www.infosecinstitute.com/podcast

[0:00:00] Chris Sienko: Is Cinderella a social engineer? That terrifying monster trying to break into the office, or did he just forget his badge again? Find out with Work Bytes, a new security awareness training series from InfoSec. This series features a colorful array of fantastical characters, including vampires, pirates, aliens, and zombies as they interact in the workplace and encounter today's most common cybersecurity threats.

InfoSec created Work Bytes to help organizations empower employees by delivering short, entertaining, and impactful training to teach them how to recognize and keep the company secure from cyber threats. Compelling stories and likable characters mean that the lessons will stick. Go to infosecinstitute.com/free to learn more about the series and explore a number of other free cybersecurity training resources we assembled for Cyber Work listeners, just like you. Again, go to infosecinstitute.com/free and grab all of your free cybersecurity training and resources today.

[0:01:00] CS: Today on Cyber Work, Nir Valtman, CEO and Co-Founder of Arnica, joins me to discuss developer behavior-based security. To put it shortly, there are lots and lots of ways that backdoors or vulnerabilities can make it into developer code. One door that we can close on these intrusions is implementing processes that detect behavior anomalies in developers. Think of it like your bank monitoring for unusual purchases, calling you to ask whether you really did just spend $300 on a bobblehead from The Last of Us that's shipping from Brazil. If you did, not judging, full speed ahead. If not, we've got a problem on our hands. Nir explains the benefits and the limitations of behavior-based security measures, as well as tips for developers in training, today on Cyber Work.

[0:01:51] CS: Welcome to this week's episode of the Cyber Work with InfoSec Podcast. Each week, we talk with a different industry thought leader about cybersecurity trends, the way those trends affect the work of InfoSec professionals, while offering tips for breaking in, or moving up the ladder in the cybersecurity industry. Nir Valtman, CEO and Founder of Arnica, is an experienced information security leader with over 13 years of experience in the space. Nir is a frequent public speaker at leading conferences globally, including Black Hat, DEF CON, BSides, and RSA. Prior to Arnica, Nir was VP of Data Security at Finastra and was on the advisory board of Salt Security.

If you've been following the show week-to-week, you will notice that we have a lot of DevSecOps, a lot of DevOps and DevSecOps, happening right around now. It seems like we're going to have a nice little book of knowledge here at the end of these. I'm very, very excited, because it is a fascinating topic. Nir, thank you so much for joining me today. Welcome to Cyber Work.

[0:02:52] Nir Valtman: Thanks, Chris. My pleasure being here.

[0:02:55] CS: Great. Can we start with your origin story? How did you first get interested in computers and tech? I see from your experience that during your years in the Air Force, you were a manager and trainer of infrastructure courses. Now, had you been working with this tech at an even younger age as well?

[0:03:12] NV: Yeah, yeah. Interestingly enough, when I was 13, my parents sent me to a Visual Basic course.

[0:03:18] CS: Oh, wow. Okay.

[0:03:19] NV: Yeah. I mean, it was a nice course to learn from. The thing is that I didn't really like to develop. In one of those classes, I just decided to write a small script that deletes a few file system files, and then they kicked me out of the course, which was, “Oh, that's awesome. I like that.”

[0:03:41] CS: Sure. Yeah.

[0:03:43] NV: It wasn't hacking, but it was just – I didn't enjoy the course that much.

[0:03:48] CS: Right. Right. You kept your seat.

[0:03:50] NV: In the Air Force, I wasn't a developer per se, and I wasn't in Unit 8200, like many of the Israelis. In my case, yes, I did the tech stuff. Back then, we called it IT and security. Today, you could call some of the things that we did DevOps. It's just some automation, some maintenance of servers and such. Yeah, I pretty much got some of that expertise in the army.

[0:04:21] CS: Yeah. You already had a flavor for it, or a taste for it before then. When you did your time, you said, “Yeah, I have this tech background and this tech acuity.” And so, that moved into the instructor portion of it?

[0:04:34] NV: Yes, definitely. It was natural.

[0:04:38] CS: Very nice. Moving further on, I like to take the temperature of our guests' career journeys by looking through their LinkedIn profiles. I think yours is interesting in part because it's almost like the textbook progression in terms of building on skills and abilities towards a natural conclusion. After the aforementioned infrastructure course trainer role, you moved into information security consultancy, then you were a senior technical consultant, then security architect, up to chief security officer, CSO, and then a long stretch at NCR Corporation as a security architect, and then chief information security officer. It's that perfect, steady upward line in terms of roles and responsibilities: you do the guts of security, then the management of security teams, then the architecture of the security plans, and then finally, the head of planning for all things security.

I just mention that because, from there, you moved into the role of director, head of application security, at that job before co-founding Arnica. Since the AppSec director role stands out and dovetails into our discussion today, especially since you mentioned you weren't really into developer stuff back in the day, can you tell me how this progression of roles and tasks led you from overall security architecture and big C-suite decisions into the specific world of developers in AppSec?

[0:05:57] NV: Yeah. I mean, the thing is that I didn't really pick management. I picked leadership. In some cases, when you're in leadership, you get into the management positions naturally.

[0:06:11] CS: Well, can you, just before we go too far, can you make the distinction between your definitions of management versus leadership?

[0:06:19] NV: Yeah. I mean, management is typically being a people manager, direct manager for people, like HR manager.

[0:06:25] CS: People directly report to you. Right.

[0:06:27] NV: Exactly. What I picked is leading people, leading changes. That's essentially what I did, through consultancy, through security architecture roles. That's the thing that I really like. I like to make an impact. I like to make a change. At some point, when we had to grow the practice, either the consultancy, or the practice within the different roles that I had in the companies, it seemed to be a natural fit to have people management on top of that leadership. Everything boils down to what type of an impact you can make, with or without people.

[0:07:07] CS: Was the AppSec piece of it, was that – did that seem like a left turn to you at all? Or was that still part of what you were doing when you were working with architecture and so forth?

[0:07:18] NV: It was still part of the things that I've been doing. The thing is that, when I started developing in Visual Basic back then, it was pretty natural to write some simple pieces of software. When I was in IT, it was fairly simple to write VB scripts, which was also scripting, not exactly coding, but still doing some automation. At that time, when I was in consulting, I also did my bachelor's degree in computer science, which, obviously, got me additional skills through algorithms and such. That's why it was a bit more natural for me to take AppSec roles, rather than, maybe, IT security roles. Still, I'm writing code to this day. I'm just making sure that I'm not rusty in those things.

[0:08:05] CS: You do keep a handle on the actual hands-on work of it.

[0:08:10] NV: Yeah. I'm maintaining our open source at Arnica. Don't judge my code, but judge the results of it.

[0:08:17] CS: I couldn't possibly. How does the work of CEO of your own company differ from the types of jobs you did before, especially within the AppSec, or DevSec space? What are some of your most common tasks now on a week-by-week basis? How much is the aforementioned leadership, and how much is the hands-on, and how much is the overseeing of what other people are doing?

[0:08:44] NV: I mean, it's completely different being a vendor versus being on the buyer side. You have a completely different job routine. First of all, when you work as a leader in a company where you need to buy certain products, you need to explain to your management what the justification for the tools is. You need to go to Gartner, look at the right trends. When you buy software, you need to explain to procurement why you're buying that software. You need to build an entire process around how you deploy software, and how you essentially rationalize the products and the processes that you're trying to embrace in the company.

When you're a vendor, the explanation goes in completely the other direction. You need to have the messaging to procurement, of course, but it's not the messaging that you're used to. I mean, every company works differently. Therefore, what you need to convey is more of the value and the simplicity of what you do to the buyer. The buyer then needs to go and build that case to ensure that your product is actually acquired. They're completely different angles on the same space. At the end of the day, when you're trying to lead with a specific thought within a company, it's on a different scale than if you're doing that for the market.

Even at a Fortune 500, like NCR, when you're trying to build a new process, okay, it's limited to a maximum of maybe 40,000 people that you can impact. When you're a vendor, you can impact millions.

[0:10:34] CS: Yes, for sure. From a practical perspective, can you walk me through either a day, or a week of your current role? What does your morning look like? What's your afternoon look like? What are the stick points? What are the points where the to-do list goes up in flames and so forth?

[0:10:52] NV: In this role, or the previous roles?

[0:10:53] CS: This role. Yeah, your current role.

[0:10:55] NV: Yeah. A few things. First of all, I am very obsessed about customer success. I'm very obsessed about feedback. In some cases, it's easy just to hop on the call, talk with someone, get some feedback about the product, things that are working well, things that you can improve. That's great. I have quite a lot of those.

As with anything involving people, you will not get honest feedback 100% of the time. You will get the things that they want to tell you to keep you happy. Therefore, we also have certain analytics tools that we deployed within the product that help me better understand which features are being used and where we need to double down. Maybe we have frustrated users. How do I identify those? We have both an aggregate view of, for example, which features are being used, and more individual views, in which, instead of going and watching Netflix, I will just go and watch FullStory sessions and see how users are actually using the product.

[0:12:09] CS: Interesting. Yeah.

[0:12:11] NV: That's a typical day that touches the product spectrum that I'm handling. Obviously, I'm also looking for interesting partnerships, interesting areas where we can have an impact. If I see something that is going on in the market, I will write a blog post, or I will encourage someone else to write a blog post. I work very closely with the developers, with the customers. We even have customers that are not paying anything, because they're freemium customers; we have Slack channels with them, and they give us feedback, because they want their freemium to be better.

[0:12:49] CS: Yes. Right, right.

[0:12:52] NV: We're happy to do that. At the end of the day, freemium is freemium, but it's bidirectional. You also give me free advice.

[0:13:00] CS: Yeah. That makes perfect sense, because the freemium is never just like, I'm going to take this free thing and then leave. If you're going to use it, there's a pretty good chance that you’re going to – you want to have a reason to keep using it. I suppose, you're going to talk to whoever is in charge of making it better. Yeah, that makes sense.

I just also want to shout out how clearly excited you are about your own product – you're watching people use it during your downtime, and so forth. I know that not every job requires that much extracurricular love for the product, but I have to imagine that's part of why you've made it to the position of CEO of this company.

[0:13:56] NV: That's how it looks to me. Every company needs to have very specific competencies. You need to have a strategy, a product, sales, marketing, and, pretty much, engineering. The thing is that, because we're three co-founders, we are in a place where each one of us completes the others. For example, I like the product. I like to build customer trust. I want to make sure that people love the product and we get that thought leadership.

Among my co-founders, I have Diko, who is our COO, and he runs sales, marketing, customer success and such. Because at the end of the day, someone needs to operationalize everything. That fits well, because, one, I don't have the time for that. Two, that's not my forte. I mean, of course, everyone is in sales, but I'm not the guy that is a professional seller.

[0:14:54] CS: Yeah, of course. Yeah.

[0:14:57] NV: Then the last guy, who is my CTO, his name is Eran – I mean, I consider myself quite a technical guy, but Eran is on a completely different level.

[0:15:09] CS: He knows it inside and out and backwards and forwards.

[0:15:11] NV: Exactly. This is why it's so phenomenal, because he writes code. He runs our data science team, DevOps, the engineering. Eventually, he just makes sure that everything that we do is built the right way and scales. This is the reason why we've had maybe one downtime, for a very short time, since we launched. It's phenomenal uptime, and obviously, the technology needs to scale, which is what we have.

[0:15:46] CS: Right. I’m sorry. We'll get to your topic today.

[0:15:50] NV: Yeah, we’ll get there.

[0:15:50] CS: I'm interested in just one more thing, because, again, I'm trying to visualize your work. I know that a good portion of your work is with the product – like you said, you're watching people use the product, you're listening to customer feedback and freemium feedback and so forth. Can you talk about the ways that you take in all this information and then filter it to your team, so that they can make these changes and so forth? I mean, it's such a different view of what I think of as a standard CEO of a company, in that you seem to be almost, basically, rewriting the instruction manual and then saying, put this into effect.

[0:16:35] NV: Yeah. I mean, at the end of the day, we also have a product manager in the company that gets a lot of my feedback. Everything that we do with analytics tools is also reviewed by marketing, customer success, product and myself, which means that all of us have different angles at the same things. Based on those angles, we know where we need to double down or not.

Now, the thing is that sometimes we just put small comments on FullStory. You can put a comment, we get it on Slack, and people start interacting on that, until we make a decision. A decision means we have a ticket for it. We have an issue on GitHub. That's one way. The other way to think about how we make decisions is that, in some cases, we interact with Hacker News, or with Reddit. Initially, we created memes, fun memes, about the topics that we're solving. To get the right answer, you just need to post the wrong answer. People will start commenting on things.

[0:17:37] CS: They're going to give you their insight, yeah.

[0:17:39] NV: Exactly.

[0:17:40] CS: They're going to push up their glasses and say, “Well, actually.” Yeah.

[0:17:46] NV: We're fans of this approach. It's either a blog post that goes completely in one direction and says, “Oh, this is very opinionated. What do you think?” Testing out the water, right? Or memes, or other directions. Yes, of course, we've got certain posts with tens of thousands of upvotes there. The bottom line is that the upvotes are not as important as the comments. We're reading those comments, getting confirmation that the direction we're thinking about is the right direction. Then we're combining that data with some of the analytics that we see and making a decision.

[0:18:30] CS: Interesting.

[0:18:30] NV: I've got to tell you, though, that a lot of that is a leap of faith. Because, I mean, you can talk to customers, but then how much does that scale? We will talk with five, 50, a 100. Fine, you got it. But it's not a big enough data set.

[0:18:50] CS: It's not a representative sample per se. It's a sample of people who want to open up a chat bar and talk to you. That's a different thing, than people who just want to get the work done. Yeah, that's interesting.

[0:19:01] NV: Exactly. Not all companies have the same maturity.

[0:19:03] CS: Yeah. Well, the counterpoint to that, though, is that analytics research is very much narrative building, but it's also narrative building with a lot of faith in it. To combine the analytics with ear to the ground, thinking about what people are telling you, it almost feels like it's giving a little less murkiness to the idea of just looking at the raw data.

[0:19:28] NV: Correct. Which is why, when we're working on a feature, we also like to introduce those features to certain customers. These can also be premium customers. Then, when we're asking, “Hey, would it be interesting for you to look at that?” we can get yeses and nos. But the beautiful thing in our case is that the product is built on feature flags. If there is a customer that expresses an interest in looking at a specific feature, then we can just enable that feature for that particular tenant, even if it's an alpha version, and let them test it out and give us some feedback. As we get more feedback, we mature the feature and release it officially. We iterate on that leap of faith as well.
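The per-tenant flag mechanism Valtman describes can be sketched in a few lines. This is a minimal, illustrative Python sketch, not Arnica's implementation; the class and feature names are invented, and real products typically back flags with a config store or a dedicated flag service.

```python
class FeatureFlags:
    """Minimal per-tenant feature-flag store (illustrative only)."""

    def __init__(self):
        self._enabled = {}  # feature name -> set of tenant ids with the flag on

    def enable(self, feature, tenant_id):
        # Turn an alpha feature on for a single interested tenant.
        self._enabled.setdefault(feature, set()).add(tenant_id)

    def is_enabled(self, feature, tenant_id):
        # Everyone else keeps the stable behavior until the official release.
        return tenant_id in self._enabled.get(feature, set())
```

A tenant that opts into an alpha gets the flag flipped just for them; maturing the feature then becomes a matter of widening that set before removing the flag entirely.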

[0:20:12] CS: Interesting. Okay, well, thank you for allowing me to wander off into the forest a little bit there. That was very interesting. That was exactly what I was looking for, so thank you. Talking about our topic today, one of the things I've noticed about people in the DevOps space, and for our purposes, DevSecOps specifically, is that you all are definitely keeping your ears to the ground, and you're always building on each other's ideas. I say that because we had a DevSecOps guest a few episodes ago, and then another one, Yossi Appleboum, who heard the first one and wanted to add to the conversation. Now, we have Nir Valtman, who has some further extensions to some of the topics we discussed before.

At this point, I'd like to recommend those episodes with our past guests. I think we will now have a DevSecOps trilogy of sorts. A little spoiler alert: we've got another one coming up next week on another aspect. Yeah, Yossi specifically discussed asset visibility and vulnerability last time. You reached out to me to discuss what you call behavior-based security as an extension topic. I'll let you explain it in full. But based on my limited knowledge, I get the sense that we're talking about the concept of introducing security into the development pipeline in a way that keeps both the finished product and the work environment safe from attacks and breaches, while not sacrificing the flow that developers need to be in to progress their work and finish tasks on deadline. Can you talk a little more about this concept of behavior-based security, and specifically, talk about what's considered normal developer behavior and how we determine that idea of normal?

[0:21:49] NV: Yeah. The way that we're looking at security in general, and behavior-based security, is that you have three aspects that you need to protect. You need to protect the developer. It can be the developer that uses your corporate resources, or also the developer's account, like a GitHub account; you can use your account inside or outside of the corporate environment. You need to protect the source code, which lives in whatever your source code management system is: GitHub, Azure DevOps, GitLab, Bitbucket. The last thing that you need to protect is the product itself as it's being developed.

The approaches to protect each one of them are completely different with one thing that is common. There's something that needs to come up with a history of behavior. I'll give you a couple of examples.

[0:22:38] CS: Please.

[0:22:40] NV: A developer account can be compromised. It's not uncommon and, as a matter of fact, I don't know if you saw, but in September last year, GitHub published an alert that warned developers, essentially, about account takeovers via phishing attacks. Not only that, developer tokens sometimes tend to find themselves in source code repos, as tokens to authenticate. It's not uncommon to see developer account takeovers. The way to protect against developer account takeover is essentially by looking at the historical behavior of each developer. By historical behavior, it can be based on the audit trail that you have, on commits, on pull requests. The more data you have, the easier it is to make a decision.

I'll give you a couple of examples. Maybe a developer that clones three repos in a sprint – maybe three services that the developer works on – is okay. Then the same developer now clones 30 repos in an hour. It smells like source exfiltration, right? It smells like someone is trying to steal the code. This is an example of identifying normal behavior for the developer. If you look at the flip side, a build agent can definitely clone 30 repos in an hour, because that's part of that identity's profile. It's not about whether you're doing too many clones. It's about the profile that you build for that identity.
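The clone example boils down to comparing an identity's recent activity against its own learned baseline rather than against a global threshold. Here is a minimal Python sketch of that idea; the identity names are hypothetical, timestamps are simplified to minutes, and the baseline is passed in directly rather than learned from a real audit trail.

```python
from collections import defaultdict

class CloneRateProfile:
    """Per-identity clone-rate baseline (illustrative sketch)."""

    def __init__(self, baseline_clones_per_hour):
        # e.g. {"dev-chris": 3, "build-agent": 30} - hypothetical identities
        self.baseline = baseline_clones_per_hour
        self.events = defaultdict(list)  # identity -> clone timestamps (minutes)

    def record_clone(self, identity, ts_minutes):
        self.events[identity].append(ts_minutes)

    def is_anomalous(self, identity, now_minutes, window=60, factor=5):
        recent = [t for t in self.events[identity]
                  if 0 <= now_minutes - t <= window]
        # Compare against this identity's own baseline, not a global
        # threshold: 30 clones an hour is normal for a build agent but
        # looks like exfiltration for a developer who usually clones 3.
        return len(recent) > factor * self.baseline.get(identity, 1)
```

With the same 30 clones in an hour, the developer identity trips the check while the build agent does not, which is exactly the per-identity profile idea described above.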

Another example of normal behavior: let's say a developer writes code, and let's say someone committed a piece of code that seems to be malicious – maybe it introduced a backdoor, either maliciously or inadvertently. In many cases, the way that you write code is pretty much the same. Of course, you have your metadata patterns, like the days and hours of the week when you commit code. Beyond that, your commit message may not look the same, or the code that you just wrote may not be the same as the code you typically write.

Therefore, think about it like a signature that you put on paper. The capability to identify whether it's the right signature for the developer is very important to protect the developer, the source code, and your product. By the way, that also happened with a PHP hack, when someone planted a backdoor into the source code of PHP and it passed the pull request. It was the moment before the code was merged that someone caught it. That's just an example of a use case.

Another example is protecting your source code, behavior-based. Think about most of the breaches that you saw recently, even with source code exfiltration. I mean, could you reduce that risk if you just minimized the permissions to least privilege? It's very common to have improper [inaudible 0:26:08] to source code.

[0:26:13] CS: Oh, yeah. Again, the speed before safety thing.

[0:26:17] NV: It's what?

[0:26:19] CS: Again, the speed before safety kind of thing. It’s just like, leave it open, we can just grab it whenever we want it kind of thing.

[0:26:25] NV: Exactly. On the flip side, if you could find which permissions are being used and, based on that, reduce them to least privilege, you would reduce the risk and the blast radius of a potential attack. Plus, not all permissions are equal. Meaning, you may have the ability to write code to a specific repo, but you will not be able to write code to the specific branch that is being deployed to production. Do you have a risk? Maybe you don't have a risk. That context, with the historical behavior, is really important to avoid false positives and annoying developers.

As a matter of fact, you probably want to have a mechanism to re-grant permissions whenever developers need those permissions back. Think about a use case where, instead of developers asking for permission every time they need access to a repo, you can say: if a developer asks for permission to a repo where he or she had that permission in the last 90 days, automatically re-grant it. You can come up with logic that makes things simpler, behavior-based.

[0:27:36] CS: Yeah. Okay. That's interesting. So the things that are coming through as anomalous – is it like an unusually large amount of money being charged to your credit card from Great Britain, and your credit card company says, “Did you mean to spend 100 pounds on a hardcover book?” And I'm like, “Yes, I have weird taste. I'm sorry. Yes, please release it.” Is it like that, where it has enough of a sense of the flow of the developer that it can tell when the developer is not doing what it's expecting them to do?

[0:28:14] NV: Exactly. As a matter of fact, that's pretty much how it works in our case. I'm a big fan of ChatOps. The thing is that developers don't like to be shamed, and you don't want to put it all over, right? The best way to handle something like that, if it's an account takeover, is that you probably want to notify the developer and ask, “Hey, Chris. Did you just push that piece of code?” It's very simple. If it's a yes, all you need to do is teach the model that it's okay. If it's a no, you need to kick off an incident.

[0:28:50] CS: That's it. Okay.

[0:28:51] NV: It's very clear what you need to do with that message. Of course, there's the question of what happens if no one responds, and such, but that's the general idea. On the flip side, if it's an insider threat, and someone plants a piece of code that doesn't seem to be their piece of code, but still seems to be risky, there is another workflow. You can say, in that case, I do want to notify someone else. I do want to send it via ChatOps to the relevant owners of that product, maybe, or anyone else who reviews that pull request. Or maybe –

[0:29:33] CS: Other team members, or whatever.

[0:29:35] NV: Other team members, because, I mean, security can probably look at that, but chances are that they won't have the right context.

[0:29:42] CS: Yeah. It's going to take too long for them to make an official judgment.

[0:29:45] NV: Exactly. The people that review code frequently, they will know the context. Even if they're not the security experts, they will be able to know if this is a backdoor or not.

[0:29:59] CS: Got you. Yup.

[0:30:01] NV: It's not a vulnerability. It's a logical check, in many cases. Also, if your culture says, let's put everything on a pull request, then you can say: if the developer didn't respond on the chat in the previous case, then put it in a pull request. It can be multiple steps that you add in the process –

[0:30:28] CS: Yeah. There’s fluidity to it.

[0:30:28] NV: - to have the right controls. Also, even the response is behavior-based.

[0:30:36] CS: Yeah. Okay. I want to move from that to – I like that part, and I'm glad we talked about it, but I also wanted to talk about what I think you said was the third part, which is the security within the code, versus the security of the writing of the code and so forth. In a blog post from Stack Overflow, the organization said that, based on their own data, one out of every four users who visit a Stack Overflow question copy something within five minutes of hitting the page. The blog emphasized that Stack Overflow is largely based around this concept: allowing yourself to learn by seeing successful code produced, getting ramped up to working code faster, and reducing frustrations.

However, they do mention that it's important to follow some basic practices to “prevent bugs, or safety issues from sneaking into your code,” which might be a little like telling college students with a paper due in the morning to use Wikipedia as a starting point, but you had better do your own research as well. In your experience, Nir, can widespread use of devs copying pre-fab code from Stack Overflow impact behavior anomaly detection and security, in which the security system is attuned to detect not just packets, but unusual, anomalous events in the system?

[0:31:41] NV: Yeah. The thing is that it's not uncommon to copy and paste from Stack Overflow. As a matter of fact, when I looked for one of the JSON parsers, I saw that the top result on Stack Overflow was actually an insecure way to parse the JSON format. Which, again, is common. The thing is that, if you are a developer that typically copies from Stack Overflow, the model for that developer will reflect that you typically copy from Stack Overflow. If you use ChatGPT, the model will reflect that you always use ChatGPT.

The thing is that in many companies, they also have linters. Even if you copied something directly from Stack Overflow, you would typically have your own annotations on it. You will change it to the way that you like to write the code. Maybe tabs versus spaces, whatever, right? You will still have your own version to it. Eventually, if you have linter checks, you will also need to pass the linter checks.

At the end of the day, it may be flagged as anomalous if it doesn't adhere to the same behavior. Not everything is machine learning. In some cases, you need to take very deterministic checks and run them. For example, if that JSON parser is added, you can check if it's used securely. This is a very simple check that you can run. It's a string check. It's not about anomalous versus not anomalous. When you combine those two, then you get either, this is vulnerable code, or, this is another type of risk that I need to weigh in on.
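A deterministic string check like the one Valtman mentions needs no model at all. Here is a minimal Python sketch; the pattern list is a made-up deny-list for illustration, not a real SAST rule set, and a production linter would be far richer.

```python
import re

# Hypothetical deny-list of known-risky call patterns (illustrative only).
RISKY_PATTERNS = {
    r"\beval\s*\(": "eval() used to parse data",
    r"\bpickle\.loads\s*\(": "unpickling untrusted bytes",
    r"\byaml\.load\s*\((?!.*Loader=)": "yaml.load without an explicit Loader",
}

def deterministic_findings(added_lines):
    """String-level checks that need no ML: run them on every diff and
    combine the result with the separate behavioral (anomaly) signal."""
    findings = []
    for lineno, line in enumerate(added_lines, start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings
```

Combining this yes/no string check with the behavioral model's anomaly score is what produces the two distinct outcomes Valtman describes: "this is vulnerable code" versus "this is a risk I need to weigh."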

[0:33:28] CS: Got you. Got you. That makes more sense. I appreciate that. I had a recent guest, Jack Nichelson, no relation, who talked with me about ChatGPT and what AI realistically can and cannot do now, or in the future, with regard to automating processes versus replicating thought. Similarly, Jacob DePriest of GitHub told us more about GitHub Copilot, which is aimed at automating some processes as you navigate around GitHub's almost limitless possibilities. That sparks something, because you mentioned GitHub Copilot specifically. The optimistic talking point for these learning-enhanced tools is always that they free already skilled developers to do the great work that they already do, but faster, thanks to having the necessary tools practically springing into their hands every time they need them.

Of course, we know, again, whether it's Wikipedia for college students, or ChatGPT being asked to write journalism, or even poetry, that high-powered tools like these are constantly liable to be exploited by people looking to cut corners in terms of thoroughness, or safety. Nir, can you speak more about how tools like GitHub Copilot and ChatGPT have impacted developer behavior and security?

[0:34:36] NV: Yeah. I mean, at the end of the day, there's a lot of repetitive tasks that developers have and essentially, GPT-3, for example, or 3.5, or look at 4, which is coming out soon, those models essentially help you to accelerate your work. Our developers also use Copilot. They're pretty happy about it. The thing is that when I look at them, because I also care about how developers behave with their IDEs, I see that in many cases, they see the suggestion and say it makes sense, but they just need to make additional changes. It's not taking everything and making it that easy. But it does a lot of the work that is quite significantly repetitive. At the end of the day, you will still need to add your own tweaks.

[0:35:35] CS: That's the kicker, I guess.

[0:35:37] NV: Exactly. It does educate you a bit, if you think about it this way. Yes, maybe there is a use case you didn't think about and ChatGPT suggested you check for nulls, whatever. That's great, because it can improve the quality of the code, but it also comes with a risk. The risk is that you need to know where the code is written and where it's sent. Because there are also privacy implications, or IP implications. If you give the wrong company the capability to send your code to ChatGPT, or any other company, and they opt in to share your data all over, that is a risk. Meaning, your source code can actually be exfiltrated and someday it will appear on someone else's computer as a suggestion. Not even talking about secrets that may go out. Just hard-coded secrets.

[0:36:38] CS: Just your style, yeah. Bite your style.

[0:36:40] NV: Exactly. Which is why I speak with a lot of security practitioners and many of them are asking about those types of implications in case they want to enable that productivity for the developers, because everyone sees the value. They just need to figure out what the security risks are that can be mitigated, or minimized, in that case.

[0:37:04] CS: Yeah. I think that's probably apparent and maybe it seems like I'm being overly pedantic about this. A lot of our audience and listenership are people either just getting into the space, or at entry level. One of the things we hear in other aspects of cybersecurity is always that we have a new candidate and they know how to use the tool, but they don't know why they're using the tool. Or, they know that you're supposed to do this process, but they don't know why they're supposed to do it.

Again, I think, just the idea of having your own analysis in there and not just copy-pasting, or relying on ChatGPT or Copilot to tell you what you want, or what you need. There always needs to be your own brain double-checking it, and so forth. Because I think it's far too many times, why is this insecure code in here? It's like, well, I did the thing, I pasted it off of here and now all these other people are pasting it off of right here. Just something worth remembering. Not everyone is 10 years into their career and stuff. Yeah, always worth reiterating that.

[0:38:07] NV: Yeah, definitely.

[0:38:09] CS: Now that we've addressed some of the potential issues stemming from normal developer behavior, can we talk about solutions? What recommendations do you have for developers and dev team members to make sure that these new ways of working aren't skimping on the sec part of DevSecOps?

[0:38:24] NV: Yeah. I'd say that there are a few things worth implementing, either by developers, or by the development leadership teams. At the end of the day, if you have a capability to check for even just the new vulnerabilities that you introduce, that is a very easy step to take, as opposed to, let's say, I'll add a scanner, whatever scanner it can be. It can be static code analysis, software composition analysis, secret scanning, whatever. Anomaly detection. At the end of the day, developers have their own tasks. They're measured by product deliverables. Product teams don't necessarily prioritize security as a functional requirement. As a matter of fact, it's typically a non-functional requirement that gets deprioritized.

[0:39:13] CS: A lot of times, it's just seen as an obstacle, or not my jive, I suppose.

[0:39:18] NV: Exactly. What I've seen to be successful is that you can say, first, let me run all the scanners that I need to run, but let's enforce something that is called zero new defects, or zero new high-severity vulnerabilities. In that case, if you can run a scan, even in the context of the developer, let's say, a feature branch, before they even open a pull request, and provide them that feedback directly, you can get a great win, because the developers know that this is a new thing they introduced. They know that they're in the right context to fix it.
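[Editor's note: the zero-new-defects gate amounts to a set difference between scan results. A rough sketch, with finding identities simplified to hypothetical (rule_id, file) tuples:]

```python
# Scan the base branch and the feature branch with the same tool, then
# fail the gate only on findings that are new in the feature branch.
def new_findings(base_scan: set, branch_scan: set) -> set:
    """Findings present in the branch scan but not in the base scan."""
    return branch_scan - base_scan

base = {("SQLI-01", "api/users.py"), ("XSS-04", "web/forms.py")}  # pre-existing debt
branch = base | {("SECRET-02", "config/settings.py")}             # one new finding

print(new_findings(base, branch))  # only the new finding blocks the PR
```

Pre-existing debt stays visible but does not block the developer, which is what keeps the policy enforceable.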

Think about an issue that was introduced two years ago. Maybe that developer is no longer in the company. This is one. The second thing is also making sure that the things that developers work on are prioritized, because not everything has the same risk. You may have new findings coming up on a product that you're working on. But hey, maybe it's not that important a product. Maybe it's only internal, or the risk is that the static tool that runs the scans is inaccurate. Also, there's a concept that I really like, ChatOps: send a message directed to the developer and ask for feedback. The developer will likely say either, I'm on it, or, it's not a real risk.

[0:40:50] CS: Okay. Yeah.

[0:40:53] NV: I mean, the developer will rarely say, “I don't have time for that stuff.” Right?

[0:41:01] CS: Yeah. Yeah. Yeah.

[0:41:04] NV: That's a way to think about it, okay. There are other areas where it's more of an incident response that can be implemented. That's something that touches more on hard-coded secrets in code. Hard-coded secrets in code are, again, not an uncommon problem. As a matter of fact, almost all companies that onboarded onto Arnica have secrets in the code. Actually, I'll rephrase: have high-severity secrets in the code. What's worth doing is also figuring out an approach to prevent those secrets from being introduced into the code.

There are multiple ways to do that. The key thing is that when a secret is introduced, or when you know that a secret is going to be pushed, find a way to mitigate that risk as quickly as possible, either by blocking that push, which is not ideal, because user experience is being impacted, and many developers don't know the Git commands. The other option is, let's assume the developer actually did push the secret; make sure that it's fixed within a few seconds. If you can run some automation around that, even better.
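[Editor's note: an illustrative secret check of the kind a git pre-push hook could run. The AWS access key ID format (AKIA plus 16 uppercase letters or digits) is a well-known published pattern; real scanners ship hundreds of patterns plus entropy checks, so this is a sketch, not a complete detector:]

```python
import re

# Matches AWS access key IDs, one of the most recognizable secret formats.
AWS_KEY = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def contains_secret(diff_text: str) -> bool:
    """True if the outgoing diff appears to contain a hard-coded AWS key."""
    return bool(AWS_KEY.search(diff_text))

# AWS's documented example key, safe to use in tests.
print(contains_secret('aws_key = "AKIAIOSFODNN7EXAMPLE"'))
```

A hook would exit non-zero on a match to block the push, or, per Valtman's preferred option, let the push through and trigger automated remediation seconds later.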

[0:42:24] CS: Great. Great. Love it. Now, we want to talk a little bit, again, for people who are either new, or just thinking about getting into this area. For new developers, the tools and technologies that'll be part of their daily workflow will, I imagine, be progressing and changing at a fast rate. Do you have any advice for future developers to make sure that they know their skill set inside and out and can use these time-saving tools safely and responsibly?

[0:42:55] NV: Yeah. I think that there are a few things that they can do. First of all, developers can look for vulnerabilities in their own source code. There's a ton of open-source tools that can help them to identify vulnerabilities.

[0:43:09] CS: Got it. Okay.

[0:43:10] NV: It can be in software composition analysis and in static code analysis, secrets and such. They can run it by themselves on their laptops. If they really want to go to the next level, maybe they can implement, let's say, if it's open source and they work on GitHub, for example, GitHub Actions and add those checks automatically, so every time that you push code, or every time that you open a pull request, you'll have those checks. This is on the developer side.

If you are wearing a corporate hat, you may implement those things a bit differently, maybe because you want 100% coverage. Maybe you want different types of feedback. At the end of the day, developers can utilize open source. It's definitely recommended, with a caveat that in many cases, open-source software can also be risky. You can download a PyPI package, or an npm package that actually has a backdoor, that will go and steal your GitHub token and send it somewhere else. There are pros and cons of using open source, but there are also places where you can run certain reputation checks. You can check the practices within each repo.

Again, there's a lot to do there. Simple things. Go to the repo, check the OpenSSF Scorecard, which is a Google-backed project that identifies which practices are implemented in each open-source repo. Or check whether that package is maintained, and if that's going to be a risk for you. It can be a security risk, or an operational risk. These are a lot of things that you can do on your own laptop and run those checks. Or, again, add them in pipelines.
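[Editor's note: a toy version of the "is this package maintained?" question. Real checks such as OpenSSF Scorecard look at commit activity, code review, CI and more; this assumed sketch reduces it to one signal, the age of the last release, with an arbitrary one-year threshold:]

```python
from datetime import date

def looks_unmaintained(last_release: date, today: date, max_days: int = 365) -> bool:
    """Flag a dependency whose most recent release is older than max_days."""
    return (today - last_release).days > max_days

# A package last released in early 2020, checked in early 2023: flagged.
print(looks_unmaintained(date(2020, 1, 1), date(2023, 3, 1)))
```

The point is that a stale dependency can be an operational risk (no bug fixes) as well as a security risk (no patches), and the check is cheap to run locally or in a pipeline.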

[0:44:56] CS: That's what I was just going to say; ultimately, these should be standard stops on your journey then. Yeah.

[0:45:03] NV: Yes.

[0:45:03] CS: Yeah. To go one level back, we're imagining fully fleshed-out developers here. For students and people who are trying to get into the developer space, and especially the security aspects of the job, can you talk about some skills, experiences, projects and other indicators of competence that they should be doing and listing on their resume, say in college, or on their way to getting their first job, that would help them stand out in the pile?

[0:45:27] NV: Yeah. I'd say, for the basic stuff, developers, or techies at least, that want to get into security, there are a few interesting projects that you should probably look at. There are all the Goat projects. There's WebGoat, RailsGoat, there's Juice Shop, which is also a great project that you can find certain vulnerabilities in. You can scan those projects statically, or you can do scans dynamically in real time. If you're interested in repo security, which is not the security of your product, but can be the security of your own environment, you can look at GitGoat, which is the piece that I wrote. Look at all of the Goat projects, essentially. That's one recommendation to get more of the offensive side of things.

Also, there's a nice project by OWASP, which is called ASVS, the Application Security Verification Standard. That will help you understand better what checks security teams are typically looking into. It's not only about secure coding; it's more about, how do you design a system securely? Where do you put the right authentication? How do you do input validation conceptually? It's a fairly long document, but it's a really good read to learn more about security. By security, I mean application security. I'm not talking about security in general.

[0:47:09] CS: Now, to that end, in doing these Goat projects, is it a matter of documenting it afterwards, so that if you're a newcomer, you can show a potential employer that you've learned these things this way? Does that count as experience?

[0:47:30] NV: It's an uncertified experience. Not something that you can go and show off with.

[0:47:38] CS: That's more like of the learning portion of it.

[0:47:40] NV: It's the real learning curve.

[0:47:41] CS: Okay. I guess, what I'm asking then is, are there projects that you could create yourself, create an action plan for a dev team, or something that you could show like, “I've thought about this. I have the skills to do this in a theoretical way, even if I don't have a company that's asking me to do it yet.” Do you have any advice for showing competence, even if you haven't entered the job space yet?

[0:48:08] NV: Yes. In that case, I would warmly recommend looking for open-source projects. I mean, assuming that you've already learned those skills. I know that at least on our side, when the CTO or the head of engineering wants to hire developers, they just look at the candidate's GitHub profile. They're trying to figure out what they contributed. It can be code, it can be pull requests, it can be issues that they created. That's a resume every bit as good as the one that you may have on LinkedIn. Even better, because it shows your competence.

[0:48:47] CS: Hands on. You can actually follow the trails.

[0:48:51] NV: Exactly. Which is, if you really want to get into this, go to open-source projects, even those that are not security projects, and find certain vulnerabilities that you're interested in. As a matter of fact, there is an open-source project called Semgrep, which scans code statically. It's so flexible that it allows you to write your own rules. You can write your own rules on what you believe would be the next vulnerability, clone a bunch of repos that you're following, identify if there are any vulnerabilities that you can find that no one else found yet, and report on them, either as an issue or as a pull request. If you create a pull request, that's phenomenal. If you can script your way to do that, it's even better.

There's a guy, his name is Jonathan. I forgot his last name. He's on the Kaminsky fellowship. He's pretty active in OpenSSF. There is a Slack channel for that. He actually built a talk on how he opened pull requests at scale for open-source projects, when he identified simple things like, do you use HTTP instead of HTTPS? Bam, a bunch of pull requests. There are ways to build that at scale as an interesting project. I warmly recommend doing that. Not everyone will do that, but that's a way to do it. If you ask me, this versus certifications, go towards that direction. Certifications, you –

[0:50:30] CS: Can you talk about which certs you think are especially beneficial here?

[0:50:35] NV: If you want to be a bit more on the offensive side, try the OSCP. If you want to be more on the defensive side and work more with developers, I would say go for the CSSLP by ISC2. I mean, I did the CSSLP. I think it gave me a few more tweaks, where I knew that I had places to improve. I was experienced when I did it. I think it's still beneficial to do that, even if you have a few years of experience and want that certification.

[0:51:08] CS: Thanks. As we wrap up today, we discussed your job tasks as CEO of Arnica. If you'd like to discuss your company more, especially the types of services you provide, here's your chance to do so.

[0:51:21] NV: Thanks. As I said, we're protecting the developers, the source code and the product. A few things are quite unique about Arnica. I'm a big fan of showing risks and not hiding them behind a paywall, which is why we show all of the risks, source code risks, secrets, permissions, misconfigurations that you may have in the environment. All of that is what we built as a freemium for unlimited users, unlimited time.

Then the automation that I mentioned, some ChatOps, GitOps, these are the things that are essentially paid. Putting that aside, the real value is to just go ahead and look at your own projects. Start with that. The main differentiator that we have is that we're actually taking an approach that we call a pipeline-less security approach. A pipeline-less security approach means that you don't really need to change any pipelines. I mean, you have 100% coverage from day one as you just go ahead and integrate Arnica.

Not only that, we have very specific logic that actually fixes problems. Let's say a developer pushed a new secret; we have the mechanism to automatically rewrite that secret for the developer with a phenomenal user experience. That's the thing that is important.

[0:52:50] CS: Yeah, you'd better mention user experience, because the CEO is making sure that you do.

[0:52:55] NV: Pretty much. Or, maybe the DevOps team, or security team wants to reduce your permissions to least privilege. Awesome. We have a self-service capability that allows you to get your permissions back. Then, on that piece of automation and the secrets, we have a couple of patents that we filed, because the way that we do it is quite unique. Also, think about tracking traditional AppSec tasks like, “Hey, there's a new static code vulnerability.” “That's great. We'll report it in a dashboard.” But the additional capability, because everything is pipeline-less and you don't need to change pipelines, is that we send that directly to the developer and say, “Hey, Chris. There's a new vulnerability you're going to introduce.” Remember, zero new high-severity vulnerabilities. “You should probably fix it,” and then click on whatever you have on Teams, or Slack, whatever button makes sense to you.

Then if you don't respond, we can put it in a comment on the pull request, or even annotate your code in the pull request and tell them, “This is actually how you fix it,” while we review the pull request. Everything is around contributing to the developers, rather than blaming, or shaming them.

[0:54:12] CS: Or hamstringing them.

[0:54:14] NV: Exactly. That's the main differentiator and main value that we provide.

[0:54:21] CS: All right. One last question. If our listeners want to know more about Nir Valtman, or Arnica, where should they go online?

[0:54:27] NV: You can go to arnica.io. That's where our content is. Our blog is also there. Sometimes we post interesting stuff over on LinkedIn, so it's also worth following us there. That's mainly where you will find us.

[0:54:45] CS: Fantastic. Well, Nir, thank you for joining me today. I really enjoyed this next block of knowledge in our DevSecOps series that we seem to have found ourselves in.

[0:54:53] NV: Likewise, Chris. Thanks for your time.

[0:54:56] CS: Thank you to all of you who have been listening to and watching the Cyber Work Podcast on a massive scale. We are awfully glad to have you along for the ride. Before I go, I would like to invite you all to visit infosecinstitute.com/free to get a whole bunch of free stuff for Cyber Work listeners. Our new security awareness training series Work Bytes is, I love this thing, featuring a host of fantastical employees, including a zombie, a vampire, a princess, and a pirate, all making security mistakes and hopefully, learning from them. I've watched a bunch of these videos and they're fantastic.

Also, visit infosecinstitute.com/free for your free cybersecurity talent development eBook. It's got in-depth training plans for the 12 most common roles, including SOC analyst, penetration tester, cloud security engineer, information risk analyst, privacy manager, secure coder, and more. Lots to see, lots to do. You've got to go to infosecinstitute.com/free. Yes, the link is in the description below, too.

Thanks once again to Nir Valtman and arnica.io. Thank you all so much for watching and listening. Until then, we will see you next week. Take care now.

Free cybersecurity training resources!

Infosec recently developed 12 role-guided training plans — all backed by research into skills requested by employers and a panel of cybersecurity subject matter experts. Cyber Work listeners can get all 12 for free — plus free training courses and other resources.


Weekly career advice

Learn how to break into cybersecurity, build new skills and move up the career ladder. Each week on the Cyber Work Podcast, host Chris Sienko sits down with thought leaders from Booz Allen Hamilton, CompTIA, Google, IBM, Veracode and others to discuss the latest cybersecurity workforce trends.


Q&As with industry pros

Have a question about your cybersecurity career? Join our special Cyber Work Live episodes for a Q&A with industry leaders. Get your career questions answered, connect with other industry professionals and take your career to the next level.


Level up your skills

Hack your way to success with career tips from cybersecurity experts. Get concise, actionable advice in each episode — from acing your first certification exam to building a world-class enterprise cybersecurity culture.