[00:00:00] Chris Sienko: Cyber Work listeners, I have important news before we dive into today’s episode. I want to make sure you all know that we have a lot more than weekly interviews about cybersecurity careers to offer you. You can actually learn cybersecurity for free on our Infosec Skills platform. If you go to infosecinstitute.com/free and create an account, you can start learning right now.
[00:00:58] CS: Today on Cyber Work, I welcome Moshe Zioni of Apiiro to talk about threat research and how to properly report discovered code vulnerabilities. We discuss the ways that vulnerabilities can find their way into code despite your best intentions, the difference between full disclosure and responsible disclosure, and what it was like to be in the last generation to grow up before the Internet changed everything. All this and plenty more today on Cyber Work.
[00:01:29] CS: Welcome to this week’s episode of the Cyber Work with Infosec podcast. Each week we talk with a different industry thought leader about cybersecurity trends, how those trends affect the work of infosec professionals, while offering tips for breaking in or moving up the ladder in the cybersecurity industry. Moshe Zioni has been researching security for over 20 years in multiple industries, specializing in penetration testing, detection algorithms and incident response. A constant contributor to the hacking community, he has been a co-founder of the Shabbatcon Security Conference for the past six years.
Today’s topic is loosely based around the ethics of threat research and vulnerability reporting, and how to do that in a way that makes you look like the hero and not the black hat. Moshe, thank you for joining me today. Welcome to Cyber Work.
[00:02:16] Moshe Zioni: Hi. Thank you, Chris, for having me.
[00:02:18] CS: Thank you. Thank you for being here. What got you interested in computers and tech originally? What was your origin story? Were you a computer kid? Did you come to it later in life?
[00:02:31] MZ: Pretty much so. A computer kid. I’m a 90s kid; I was born in ’83. Through the 90s, as you might know, or the listeners may know, things were pretty hectic in terms of the Internet. Once the Internet took off, everything changed. It was amazing to see from the sidelines, and I wanted to join in, out of intellectual interest. I was in junior high back then. Those were my first steps into the hacker community.
[00:03:05] CS: You’re at that exact point where you still have a before-and-after sense. You were familiar with life before the Internet, and then you saw it take off and blow up. Yeah.
[00:03:17] MZ: Absolutely. Yeah.
[00:03:18] CS: That’s really cool. Yeah, I want to talk a little bit about your previous experience. With some of my guests, looking at their LinkedIn profiles reveals a series of pivots, or changes, or transformations between industries. With you, Moshe, an awful lot of your career journey has been laser-focused specifically on penetration testing and threat research. For listeners who are in the initial stages of becoming future pen testers or threat researchers, can you speak about the different points at which you leveled up your skills in the realm of pen testing, some of the big projects or opportunities you grabbed hold of, and how these made you a better threat researcher?
[00:03:54] MZ: Wow. That’s a great question. It’s important to know that there were several shifts within the industry throughout the years. I began my professional journey around 2005, and since then, the whole industry has been changing and shifting all the time. This also speaks to what kinds of penetration testing jobs, and infosec jobs in general, were available to me and others as well. Luckily for me and the others in the industry, there was a place for our talent. These were the first days of those industries, and because of that, everything was shaped around that.
Skills were very different from each other. Everyone came with a different set of skills, because everyone had a different set of, I would say, experiments, things they found themselves doing at 2 a.m. without noticing. As part of that, to your question, I think the most important thing for me, especially, was to go broad. Not to be focused on a single technology, or a single tactic that you just use as a hammer until everything looks like a nail, but to go very, very broad. Whether it’s infrastructure, operating systems, encryption, malware, incident response at times. I think that gave me a lot of firsthand experience. It also reignited my passion for security over and over again.
I wasn’t bored for a second. If I was bored with one specific case, I could put it to rest and start something else adjacent to it. I think this is the most crucial thing in penetration testing: to go broad. Because if you are looking through a pinhole, you may be the best, the sharpest knife in the drawer, but once something is outside your scope, something you need to take advantage of, you won’t even notice it. I don’t want to say there is no place for sharpening one knife. Of course there is.
At least have some perception of the other realms as well, so you’ll be able to connect the dots and complement that with, of course, more research and more education toward that one specialty.
[00:06:15] CS: That lines up with something I discussed with a previous guest, Gemma Moore, a pen tester in the UK. She was saying that, for people who started this type of work in the late 90s and early 2000s, you didn’t have these prepackaged pen testing tools. On one hand, a lot of that automated aspect now makes it easier to go deeper and get more specialized. The thing that she really appreciated about that time, and it sounds like that’s the case with you as well, is that there were fewer tools, but you had to be better with them and you had to do a lot more lateral thinking. Does that line up with your experience of those days?
[00:06:49] MZ: Absolutely. She put it very, very nicely. Yeah.
[00:06:51] CS: As I mentioned, for today’s topic we’re going to talk specifically about code vulnerabilities, the proper process for emergency patching, and the ethics of security research. Let’s start with some basic parameters. What are the types and some common examples of code vulnerabilities that we’re speaking about here? What are the fundamental errors or problems that you’d like to discuss?
[00:07:16] MZ: Again, this is a very broad question. I’ll try to answer it generally. Let’s start with the basic building blocks: what kinds of vulnerabilities are there, and what realms do those vulnerabilities exist in? Of course, if you are looking at encryption mechanisms, you have different vulnerability types than if you are, say, building a frontend for a website, or maybe its backend. Some of the variables come from whatever programming languages you are using. I won’t go into the details of why, but in general, different programming languages have different protection mechanisms, and some of them are prone to one type of exploitation and not another, or less prone, I should say.
Some technologies are inherently defined by, as I said, encryption mechanisms, for example, and those kinds of exploitations are more on the side of, for example, side-channel attacks on the encryption, or maybe mathematical attacks on the encryption mechanism by definition. Maybe to answer your question, and it’s a very hard one, I would say there are implementation vulnerabilities, where the way you coded the actual solution to your problem is wrong, and maybe you didn’t consider something memory-wise, or I/O-wise, or maybe storage-wise, or permission-wise. The other type is, I would say, algorithmic. If there is a mathematical challenge there and you haven’t gotten it right, that is a different type of vulnerability overall.
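As a concrete illustration of the implementation-versus-algorithm split Moshe describes, here is a minimal, hypothetical Python sketch of a timing side channel: the algorithm (compare a submitted token against a secret) is correct, but a naive implementation leaks information through response time. The function names are ours, invented for illustration; they are not from the episode.

```python
import hmac

def insecure_check(token: str, expected: str) -> bool:
    # Implementation flaw: '==' short-circuits at the first differing
    # character, so response time leaks how much of the prefix matched,
    # a classic timing side channel.
    return token == expected

def secure_check(token: str, expected: str) -> bool:
    # hmac.compare_digest takes time independent of where the inputs
    # differ, closing the timing channel.
    return hmac.compare_digest(token.encode(), expected.encode())
```

Both functions return the same booleans; only the time they take to return differs, which is exactly why this class of bug is easy to introduce and hard to spot in review.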
[00:09:01] CS: I see. Now, is this the thing where, well, we offer secure coding courses and so forth. Is this a failure in, not a failure, but is this an error, or an issue from just working too quickly with the programming language and not dotting your I’s and crossing your T’s in terms of all of the safety on the backend? Where do you see this coming from? Is it lack of experience? Is it just not being used to, oh, when you make this particular thing, make sure you close up this backend, or make sure you seal this access hole, or whatever?
[00:09:45] MZ: We can talk about the common reasons why non-secure code exists. In general, of course, bugs exist because people are writing code. Once you write a program, no matter how experienced you are, you are introducing bugs into the code base. Security bugs are no different. Everyone will contribute, at some point in their lives, some security bugs into their programs, no matter what.
Now, why it happens really depends, as you said yourself, on the expertise and security inclination of the person writing the code. Also, you can’t know everything. If the technology has been changing and you haven’t paid attention to a specific corner case you should be aware of, or didn’t consider something very obscure that can still happen sometimes, it’s very, very hard to tackle, just because it’s very hard to think about. Of course, as you said, time pressure also takes effect. If you have two days to write something that should take you two days, it doesn’t mean the result will be perfect. If you had maybe two weeks, it would maybe be better, but nonetheless.
[00:10:52] CS: Would be closer. Yeah.
[00:10:53] MZ: Yeah. But won’t be perfect, because that’s the way we are.
[00:10:58] CS: Right. That makes perfect sense. I mean, you only have to look at how many times you have to update this or that browser, or program, because they found yet another thing. Yeah, I mean, that’s just a part of life now. Once a vulnerability has been found in a piece of code, can you speak about the in-depth research process that follows? What’s the chain of inquiry that you’re creating to get to the bottom of something like this?
[00:11:23] MZ: I can speak of several vulnerabilities that we found throughout my career. Maybe the most recent one was a vulnerability that allowed us to read, I would say, passwords and secrets on a system. Basically, what happened there is that two things intrigued us. First of all, we understood the environment. We understood where those programs live. This is a very specific program named Argo CD, which is part of the way that modern code is being built and delivered today, by systems like Argo CD. We understood the environment: where the code lives, what kind of parameters come in, what the dangers are, and what the crown jewels are that you need to protect.
By that, we started to look at the code base, to understand what the code means to do, what the meaning behind it is, and what the implementation and maybe its limitations are. One specific thing that caught our eye, the technical term for it is sinks and sources. You are looking for where you can input some data, the source itself. The sink is where the data goes through and finally finds a home, maybe in vulnerable code. Once you find something, you can start from the sinks instead of the sources, because it’s a very massive body of work.
You need to understand the code. If it’s more than a few files, it will be difficult to navigate and to understand where the code flows and where the data flows. In our case, we started with the sinks themselves. We found a way of looking at the data that was a bit, I would say, fishy. We found code doing something beyond the expected, something like reading a file that is not supposed to be under a given directory. We understood that if you can hop to another file, you’ll be able to read the secrets we are talking about. Furthermore, there was something that, I won’t say was a red flag, but for someone like me, as a researcher, it screams, check this out: code that was written specifically against this kind of attack.
Some part of the code was specifically built to defend against the attack we expected to find, which was very interesting, and telling about the expertise of the programmer who wrote the code, because they understood there could be a problem. That was the start. We began to dissect exactly how the program works and how these defenses, the walls that were put up, were working. Through that, you are trying to thread a needle. You try to see what unexpected results, what unexpected inputs, can be thrown at the system that evade the mechanism that was put in place. Eventually, we followed the sources, as we said: what inputs we can give, which of them will go through this processing, and which will evade it.
We found exactly that: an input that the programmer expected to go through the processing, through the filter, and be checked, but that was at the same time legit input, which the program could read. The program was confused by it enough to perfectly bypass the protection that was built. That’s the process itself; I guess that comes full circle.
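The kind of filter bypass Moshe describes can be sketched with a hypothetical Python example. This is not the actual code from the disclosure; the sanitizer, base directory, and payload are all invented for illustration. The "wall" strips "../" from the input, but only once, so an input that reassembles "../" after stripping walks right past it.

```python
import posixpath

BASE_DIR = "/app/allowed"

def naive_sanitize(user_path: str) -> str:
    # The defensive "wall": strip any "../" sequences from the input.
    # Flaw: str.replace removes each occurrence exactly once, left to
    # right, and never re-checks the result.
    return user_path.replace("../", "")

def resolve(user_path: str) -> str:
    # Join the sanitized input under the allowed base directory.
    cleaned = naive_sanitize(user_path)
    return posixpath.normpath(posixpath.join(BASE_DIR, cleaned))

# A legit-looking input that survives sanitization with traversal intact:
# stripping the inner "../" from "....//" leaves "../" behind.
evil = "....//....//etc/passwd"
# naive_sanitize(evil) == "../../etc/passwd"
# resolve(evil) == "/etc/passwd"  (escapes BASE_DIR entirely)
```

The usual remediation is to resolve first and check containment afterward (verify the normalized absolute path still lies under the base directory), rather than pattern-matching the raw input.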
[00:15:22] CS: Now, when you’re able to discover this type of issue, as you said, where there’s this level of complexity that it’s simply bypassing and just spitting out the results: is this something where you’re able to prescribe a remedy, or is it simply by the act of saying, “We found it. This is what it’s doing,” that they’re able to correct the code themselves?
[00:15:48] MZ: Both. In this case, they were very cooperative, the developers of the program that we reported to. This also goes to the ethics of it. Anyone who’s an expert won’t go and tell everyone about the exploit, because it’s very dangerous. Instead, you disclose it to the developer to see if you can fix it together. In this case, the triage itself, the process of understanding what this problem can do and what its broader impact is, was very, very quick on their side. They started immediately to remediate. We started to discuss other possible variations of the attack, to cover more ground.
That was a very open discussion and a very good one, I should say. I’ve been through tens, if not hundreds, of these vulnerability disclosures. This was one of the best I’ve had. To your question, sometimes you are the one recommending remediations, sometimes it’s a cooperation, and in some cases you don’t even need to say anything. The minimum for me is to at least be able to suggest remediation and to at least try to have that conversation, which is very important for any research you do.
[00:17:05] CS: Yeah. Now, the particular example you used transitions nicely into my next question. I want to ask if you can speak to the proper method and channel for reporting vulnerabilities to a vendor, because on a past episode of the podcast, I spoke to Connor Greig in the UK, who now famously had a story about how he received a string of passwords and data from McDonald’s after winning a contest prize. When they sent him his prize via email, they ended up sending their entire set of access credentials along with it. He spent a frantic weekend trying to find someone at corporate. They didn’t have a bug bounty program, so he spent 24 hours trying to find someone to report it to, and he had to call the US, and then they had to give him the number for the UK.
Can you tell me about what needs to happen at the reporting level? Because I know that improper reporting can either cause you to be ignored by the company, or even be perceived as a threat. What channel should you use, what tone should you take, and how can you ensure that your findings aren’t treated as the work of a pest, or even an active threat?
[00:18:06] MZ: Oh, so this may be the most substantial transition the industry has gone through over the past 20, maybe 30 years or more: the standardization of disclosure mechanisms. Through the 90s, I rarely disclosed a vulnerability to an organization unless it was anonymous, just because of the threat of legal action. Especially as a kid, you didn’t want to go through that, understandably, I think.
Back then, I tried to do it anonymously, and every time it was successful, which was a great achievement by itself. Today, you have many more legal mechanisms in place that let you disclose without the threat of legal action. That said, legal threats still happen. This is also something we are working on, something the whole community is concerned with. The general terms are responsible disclosure versus full disclosure. Maybe, if you would like, we can go through the differences.
[00:19:15] CS: Absolutely. That’s my next question. Yeah. Okay. Go ahead. Please continue.
[00:19:19] MZ: Responsible disclosure, so, without nitpicking, because there is a lot to say about the debate around responsible disclosure and what is responsible to begin with. In general, this is a mechanism where you have an agreement, or the organization in question has a statement: we accept these kinds of disclosures, and this is the way to go about it. Over the last 15, maybe 20 years, there has been an explosion of options, especially around bug bounties, which took it a step further and standardized the way you communicate with a third party, a mediator, instead of the organization itself, to have this triage. HackerOne and Bugcrowd are the two big names that have done an amazing job of mediating between the researchers and hackers on one side and the organizations themselves on the other. Sometimes this goes through bug bounty programs. Sometimes it goes another way. Excuse me, something went off.
[00:20:31] CS: Oh, sorry.
[00:20:33] MZ: You can hear me still?
[00:20:34] CS: I can still hear you. Yes, no problem.
[00:20:36] MZ: Okay. Just let me – Excuse me, Chris.
[00:20:40] CS: No problem.
[00:20:40] MZ: Okay. A bug bounty program is one way to go through this. In other cases, you should at least look for a security page that will explain what kind of process they expect. And there is another option. In our recent case, they had a wiki page, because this is a GitHub-hosted repo. The wiki page said, if you have any security advice or security disclosure, please contact this email, which is also a very good example. Maybe they will also define the minimum contents of the disclosure they would like to have, in order to have a healthy conversation and not just be bombarded by claims of, this is vulnerable, go fix it, without much of a conversation.
That is also the number one reason, I would say, for an organization to go through bug bounty programs: to have this triage and to have a minimal set of legal expectations from all sides as well. The downside of responsible disclosure is when it doesn’t work. Your example, I can feel it very, very closely. I had, without disclosing any names, a vulnerability of a bypass nature in a very major company’s product, something that at least 30% of your audience has on their desktops.
It was a very specific program within this ecosystem. I tried to contact this big company. No one answered. I tried many, many emails, many people that I tried to go through, and nothing happened. A couple of months later, there was an M&A involving this specific organization, so I tried to contact the new organization. Nothing happened. It went on like that for years. I never disclosed it, because I didn’t find a way. I think that by now it’s fixed, because they’ve moved through a couple of iterations of code reviews and a complete overhaul of the code itself.
[00:22:54] CS: Okay. Well, thank you for that. I’m still not quite clear on the distinction between full disclosure and responsible disclosure. I understand responsible disclosure now, but what’s the – What are you doing if you’re doing full disclosure? Is it basically you’re saying, “I found this vulnerability” and you’re telling the general public and then by extension, you’re making them look like a target to potential attackers? Is that what that means?
[00:23:23] MZ: Yes. This method, as you said, is just going public with full details. I want to say, this is not chaotic. Usually researchers, if they are respected enough, are not doing it with the intention of getting attention; if they are, it usually isn’t received well by the community itself. Ethically, we strive for responsible disclosure of some kind first. If that goes to hell, or it doesn’t work, there are instances where researchers resort to full disclosure, which means public disclosure. Everyone knows about it now, and maybe the organization will do something; the public pressure will make it happen still. There is maybe something in between: several responsible disclosure policies will actually set a deadline. If you are not trying to fix this, or we don’t have good visibility into when it’s going to be fixed, we are going to disclose it nonetheless, let’s say, 90 days after reporting it. This is a kind of middle ground that some people, some researchers, have found.
[00:24:41] CS: Yeah. I mean, do you have any sense of whether full disclosure works? Because that seems like the nuclear option, you know. I can imagine that the shame that would be associated with that would necessarily do the trick. I mean, do you have a sense of whether just going public with it has resulted in positive results? Does it happen a lot still, or are companies starting to get the hint and offer channels for quietly disclosing?
[00:25:12] MZ: By positive results, you mean it was fixed, I guess? If that’s the measurement, yes. Several times with full disclosures, we saw a very quick turnaround. Even if the vulnerability was nothing, something you shouldn’t be very bothered about, still, the public pressure does it. The issue, of course, with those “successful” attempts at full disclosure is that they are also very common ground for attackers to exploit before the fix is there. There are some very dangerous implications to this kind of full disclosure. I won’t recommend going nuclear, as you said, just from the [inaudible 00:25:52] list.
[00:25:53] CS: Yeah. I suppose there is also the third way that we didn’t talk about: full disclosure, responsible disclosure, and then selling the vulnerability on the black market, or whatever. I mean, that’s the one you don’t want, and that’s the one that they’re most afraid of. Yeah, now that makes sense. I guess, when I think of, oh, we’re going to go public with your vulnerability, it’s like, well, you’re not getting the bug bounty. They’re going to take care of it, but they’re not going to roll out the red carpet for you next time.
I want to talk about – the purpose of the show is to help people break into the industry, learn about this type of work and these types of careers and so forth. As chief security researcher, what are some of the main security threats and threat actors that you’re currently researching and dealing with? Are there particular trends, or issues or threat actors that keep you awake at night right now?
[00:26:55] MZ: Absolutely. I’m focused on something called the SDLC, or software development lifecycle. An extension of that is what is called today supply chain attacks, which you can think of like a real-world supply chain: your goods go through several hands before they are received by you. It’s not that different in software. You have so many dependencies in software; you depend on services, or packages, or code that was written by others, a third party, that you have to trust at some point, because without that, you usually cannot write any extensive programs.
By that, this is a very modern and very active area of attack right now. Instead of attacking you on your computer and sending you phishing emails in order to hook you in, many attackers will try to go after your third party, maybe a contractor, or a dependency repository. They will hack them and introduce malicious code. By that, you will voluntarily pull this code to your computer and run it without any suspicion, which is why these attacks are very lucrative: you are going to run it, because you asked to run this code on your machine.
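One widely used defense against the dependency tampering described here is pinning content hashes of artifacts you have already vetted, in the spirit of pip's `--require-hashes` mode. A minimal sketch follows; the package name and pinned digest are illustrative, not a real release (the digest below happens to be the SHA-256 of empty input).

```python
import hashlib

# Known-good digests recorded when each dependency was first reviewed.
# (Illustrative values, not real package hashes.)
PINNED_SHA256 = {
    "somelib-1.2.0.tar.gz":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(name: str, payload: bytes) -> bool:
    # Fail closed: unknown artifacts are rejected, and a tampered
    # upstream release with the right name but wrong content fails too.
    expected = PINNED_SHA256.get(name)
    if expected is None:
        return False
    return hashlib.sha256(payload).hexdigest() == expected
```

The same idea scales up to lockfiles with hashes and to artifact signing: the decision to trust is made once, at review time, not silently at every install.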
Your chances of dealing with that right now are slim, because you don’t have many mechanisms to protect against it, at least not in the grand scheme of things. To your question, that’s where I’m focusing my efforts today. That’s what we are doing at Apiiro as well: coming up with solutions to those kinds of attacks, first of all to proactively defend against them, and also to detect those attacks before they really hit the ground. It’s something I never rest on; we’re not there yet. We have a lot of information to share. The detections that we produce are, honestly, and of course I’m really objective here, great. Nonetheless, the attackers will also try their best to overcome those detections. This is, of course, a story without an ending.
[00:29:27] CS: Right. Right. Yeah. I imagine, that’s a harder access point for a hacker to get into, but the rewards are so much larger. Because if you’re adding that vulnerability into something, that’s going to disseminate among tens, or hundreds of thousands of people, then you have a heck of a lot bigger pay day than if you’re just focused on sending, as you said, that one phishing email to that one CEO. They might not even notice that they’re doing that.
[00:29:50] MZ: Yeah. The attacks that we have seen in the wild over the past two years are so devastating that, in fact, you can’t really put them aside. You have to deal with them. Also, the US government recently published an executive order specifically addressing supply chain attacks, because these are very, very concerning attacks for anyone, for the US government specifically, and of course for every industry out there.
[00:30:19] CS: Yeah. Now, what is the day-to-day work of a security researcher like? I mean, how much of your day is spent responding to discoveries in the news, or acting on the concerns of vendors, or just going out on your own and researching whatever might be coming next? How is your day structured?
[00:30:37] MZ: There is no structure, I have to say. In general, as a lead of researchers, I’m trying to keep a balance between their, I would say, agnostic research and their detection and product-value research. I honestly believe that researchers thrive in a research environment that gives them the intellectual freedom to invest their time and efforts into something they are really passionate about, and not only something that is just burning right now.
There is, of course, great virtue in going after a very immediate concern that needs to be tackled, which happened to be the case today. We have a vulnerability that was published, and the whole community, including ourselves, has been putting eyes on it and trying to scan code. There is a Spring-based framework that was allegedly found to have a vulnerability. Usually, I try to give the researchers, including myself, this balance of freedom. So I don’t have a specific number for you, unfortunately, like 75%, 25%.
[00:31:46] CS: Yeah, sure.
[00:31:47] MZ: Something along the lines that you should at least be productive and find the time to do a substantial amount of work on each. Not something like an hour a week just set aside for agnostic research. Something much more substantial than that.
[00:32:01] CS: Yeah. You need to really be on top of all the things that are happening, in addition to whatever you’re doing in your own work. From a work standpoint, for people who are listening, who are looking to get their foot in the door, do you have any tips or advice for students of cybersecurity, or career aspirants who want to work in either threat research, or pen testing? Obviously, it’s different from when you started in 2005-ish. What are some experiences, or self-initiated projects they should be engaging in now to make them more desirable to potential employers?
[00:32:35] MZ: Well, let me just structure it a bit. I would maybe split it into three. First of all, find your passion. Your immediate passion. What makes you really interested in knowing how something works? What is behind the screen there, and what can be learned from that? Go ahead and learn that. Be driven by your passions, because that’s exactly what keeps you awake, and this is something you won’t be bored by. Keep in mind that some of it will be over your head. You won’t be able to learn it in a day; have the patience to learn these things over maybe more than a few months or so. You can even take courses on those subjects. Really, I’ve seen hundreds of researchers and led many of them, and it varies between researchers.
Some will say, I would love to watch YouTube videos and then try it myself. Others would rather go through a full-on course, academic maybe, or somewhere else, like an online course, and by that try to understand as much as they can. Or read books; myself, as an old-school person, I like to read books as well. Nonetheless, this is the second part: leave space for actual experimentation and hands-on work. Without that, everything you learn and everything you hear about will be moot. You have to build this muscle memory for your actions and for your attacks, to be able to actually find something, maybe a variation of what you already knew.
The third thing I would say is that, in order to thrive as a researcher, you have to go outside your comfort zone. Once you feel very, very comfortable and you are not feeling, I would say, even a bit frustrated, or more than a bit, it means you are learning something that someone else already learned. And this is okay. This is more than okay; you should go through that. But once you are trying to research something new, know that frustration is part of the deal.
You have to deal with something that no one understood before, that no one succeeded at before. Maybe it’s a code snippet you didn’t get working, maybe it’s an exploit you can’t write yet. But you will. Without this endurance, the whole industry wouldn’t be here. These are maybe the three parts, I think, that make a good researcher, without being specific to any particular realm or expertise.
[00:35:31] CS: Yeah, that’s fantastic advice. I read something along those lines: you should aim to be learning at a point where you’re just about in over your head, but not quite. You should be struggling, but not completely overwhelmed.
[00:35:45] MZ: Yeah.
[00:35:46] CS: As we wrap up today, Moshe, can you tell me about your company, the company you work for? Is it Apiiro, Apiro?
[00:35:53] MZ: Apiiro. Yeah.
[00:35:53] CS: Apiiro. Can you talk about some of the services you offer your clients and some of the updates, or big things you’re looking forward to working on, or showing us in 2022?
[00:36:02] MZ: Yeah, sure. Apiiro is a startup. It came out of stealth in October of 2020. Since then, we’ve even won a couple of prizes, including the RSA innovation award last year, where we were number one in their roster. In general, we produce software that gives you visibility into your software development lifecycle and the supply chain aspects of the SDLC itself.
By that, we first provide our customers visibility into what resides within their actual inventory of applications and services, including cloud-based services, like Kubernetes and cloud-native applications. Second, beyond observability, we also give you the context for those risks. We combine different aspects of the risks residing within your applications, or in your implementation of applications, and try to give you the whole spectrum of what the risks mean. Say you have code, and that code is found to be vulnerable, or the code is being committed to the repository of the code base. On its own, that doesn’t really say much.
If you run it through a tool and it says there may be a vulnerability there, that won’t say much without the context. We process the code and analyze the developer’s behavior to see whether it is eccentric, or anomalous relative to other developers, or to the specific developer group in question. With that context and others, we combine everything into a single risk, a single picture of the risk structure within your application base.
Second, we measure those remediation efforts and the application security programs you have. By measurement, I mean that you will have actual data to look at, and from that, indicators of success. You can also combine different indicators to produce your own, maybe something customized to your organization. Lastly, remediation. Once you have this visibility and these measurements, you also try to remediate. We give you the options to remediate, and we work through that process with you, with different availability times and different aspects of security.
As I said before, especially if you are working with rapid development practices, you will find that this kind of remediation needs to be very, very fast. We provide something the industry calls shift left, meaning we believe that if you remediate those kinds of vulnerabilities as early as possible in the development chain, the cost and effort will be minimal. We provide this shift even at the design level, before any code has been written. All of that is combined into Apiiro, and that’s basically it.
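[Editor’s note: the idea of combining several contextual signals into one risk score, as Moshe describes above, can be sketched with a toy example. Everything below, including the signal names, weights, and scoring function, is invented purely for illustration and is not Apiiro’s actual method.]

```python
# Toy illustration: combining contextual risk signals into a single score.
# All signal names and weights are hypothetical, chosen only to illustrate
# the idea of weighing a raw finding together with its context.

def combined_risk_score(signals: dict, weights: dict) -> float:
    """Weighted average of risk signals, each in [0, 1]; result in [0, 1]."""
    total_weight = sum(weights.get(name, 0.0) for name in signals)
    if total_weight == 0:
        return 0.0
    weighted = sum(value * weights.get(name, 0.0) for name, value in signals.items())
    return weighted / total_weight

# Example: a vulnerable piece of code committed by a developer whose
# behavior looks anomalous scores higher than the raw finding alone.
signals = {
    "static_analysis_severity": 0.6,    # severity of the code finding
    "developer_behavior_anomaly": 0.9,  # commit pattern unusual for this developer
    "exposure": 0.5,                    # e.g. internet-facing service
}
weights = {
    "static_analysis_severity": 0.5,
    "developer_behavior_anomaly": 0.3,
    "exposure": 0.2,
}
print(round(combined_risk_score(signals, weights), 3))  # 0.67
```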
[00:39:11] CS: Very good. Well, thank you again for your time, Moshe. This has been a really great discussion. I have one last question for you. If our listeners want to learn more about Moshe Zioni, or Apiiro, where should they go online?
[00:39:23] MZ: I’m mostly active on LinkedIn and Twitter, under the name Dalmoz. If they want to contact me personally, firstname.lastname@example.org. Feel free to contact me. I’ll be more than happy to have a conversation with you.
[00:39:43] CS: Oh, wonderful. Moshe, thank you so much for joining me today. This has been a really, really great talk and I appreciate it.
[00:39:48] MZ: Chris, it was amazing. Thank you very much.
[00:39:50] CS: As always, I’d like to thank everyone who has been listening to, watching and supporting the show. New episodes of the Cyber Work Podcast are available every Monday at 1 p.m. Central, both on video at our YouTube page and on audio wherever you get your downloads.
Thank you very much once again to Moshe Zioni and Apiiro, and thank you all so much for watching and listening. We will speak to you next week.