Industry insights

6 cybersecurity truisms the industry needs to rethink

September 19, 2022 by Ali Hadley

A lifelong hacker with more than 15 years of experience in security, Alyssa Miller, BISO of S&P Global Ratings [note: since this podcast was recorded, Miller has changed positions and is now the CISO of Epiq Global], penned a multi-tweet broadside earlier this summer, calling out “reckless” cybersecurity platitudes. 

The most common cliches, like “users are the weakest link in the security chain,” range from inaccurate to dismissive to downright offensive. As Miller explains, they don’t just misplace blame for major cybersecurity issues. They polarize the teams that should be working to solve them. 

Hear Miller interrogate six of the most tired truisms — and offer real solutions to the issues they overlook — on this episode of the Cyber Work Podcast.

 

#1."It's not if you get breached, but when"

 

This is a common shorthand, tossed freely around the industry, for the understanding that even the most secure systems are still susceptible to hackers. It’s also one of the main reasons upper management can be hesitant to allocate more money to cybersecurity solutions. 

Why? Because no one wants to invest in an issue that’s “unfixable,” especially when the person tasked with keeping an enterprise safe is waving the white flag before the battle even starts!

“On the surface, that statement is 100% accurate,” Miller explains. “But stop saying it. Everyone who is hearing it is being impacted in a very different way, and it’s actually very counterintuitive and counterproductive to what we’re trying to do.”

Instead of focusing on what cannot be done, i.e., keeping every intruder out of the network, Miller says to flip the script. “Focus on resilience,” she advises. “Talk about how we detect and respond to attacks. How we put tools in place to detect them, to limit them, and to analyze them.” 

Miller says, “It’s not about preventing the breaches altogether. It’s about limiting the risk of them and being able to respond when they do happen.”

A subset of the truism above is the common tactic after a breach: “Don’t let a good crisis go to waste.” Security teams who suffer a breach or downtime and then ask for more funding run the risk of devaluing the entire security operation, causing the C-suite to look for other mitigation options, such as cyber insurance. 

In the event of an attack, show stakeholders how you stepped up to minimize damage. Tell them which tools you used, what you learned, and where calculated investments could shore up the remaining gaps. “When you show the positive impact of your past investments, they’re more willing to give you money to invest in new stuff,” Miller says. 

 

#2. "Just patch your shit"

 

For what feels like forever, patching has been viewed as the be-all and end-all in the cybersecurity community, as if this single variable determined whether or not an organization will be breached. And while patches do help to prevent significant attacks, Miller says this overemphasis on patching is a mistake for two key reasons. 

First, it places all the blame on a single part of a larger, holistic security strategy, completely overlooking the other mitigation controls that should’ve been implemented. 

Second, it makes the people in charge of patching look incompetent, which isn’t true. “Our operations teams, our SREs, the people who are down there daily doing patching, it treats this like it’s simple,” says Miller. “It implies that those people are just being lazy or they’re not doing their job well, which is a complete line of trash.”

Instead of putting all your incredibly valuable security eggs in one basket, Miller reminds us all to go back to the basics. Apply best practices. Deploy the proper devices to mitigate damage — as all security professionals should. After all, Miller points out, sometimes vulnerabilities aren’t known, and thus, patches aren’t even available. (Remember the infamous Log4j vulnerability?) 

For starters, Miller recommends you have the necessary Web Application Firewalls (WAFs) and controls to shut down outbound Internet traffic. That way, you can quickly limit damage in the event of an attack. 
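To make the “shut down outbound traffic” control concrete, here is a minimal sketch of host-level egress filtering, assuming a Linux host where iptables is available. The allowlisted CIDR ranges are illustrative placeholders, not a recommendation for any particular network.

```python
# Minimal sketch of host-level egress filtering, assuming a Linux host
# with iptables. The allowlisted CIDR ranges below are illustrative only.
import subprocess

ALLOWED_EGRESS = ["10.0.0.0/8", "172.16.0.0/12"]  # hypothetical internal ranges

def lock_down_egress() -> None:
    """Permit outbound traffic only to an explicit allowlist; drop the rest."""
    for cidr in ALLOWED_EGRESS:
        subprocess.run(
            ["iptables", "-A", "OUTPUT", "-d", cidr, "-j", "ACCEPT"],
            check=True,
        )
    # Default-deny: anything not explicitly allowed above is dropped,
    # which limits data exfiltration and command-and-control callbacks.
    subprocess.run(["iptables", "-A", "OUTPUT", "-j", "DROP"], check=True)

if __name__ == "__main__":
    lock_down_egress()  # requires root privileges
```

An allowlist-plus-default-deny posture like this is exactly the kind of control that keeps an attacker from calling home even when a vulnerability slips through.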

“I know there are people out there right now saying, ‘Well, I can’t get money for a WAF, or my business won’t let me install it. Or we can’t do network segmentation, or this, that, the other thing.’ But, I don’t accept those excuses,” Miller says. “Our job as security people is to communicate the needs for these controls in a way that resonates with the business and motivates them to action. That’s on us.”

 

#3. "Users are the weakest link"

 

In addition to devaluing people as professionals, this commonly used phrase implies that cybercrime would disappear if humans weren’t involved. But, Miller argues, we actually need users in the loop to learn how we can improve security measures. 

“First of all, the statement itself is one small step above ‘users are stupid,’ right?” Miller says. “When we say, ‘users are the weakest link,’ we imply this immediate division between our user base and the security team, which is exactly the opposite of what we want. You want a very interactive relationship. They’re learning from us and, here’s the kicker, we’re learning from them.” 

Instead of pitting the security team against the rest of your organization, Miller suggests a collaborative approach, starting with your own communications. That way, you’re not encouraging the exact behavior you want to avoid. 

Emails from internal systems like ticketing tools, for example, can look a LOT like phishing attacks: no personalization, no message, just a ticket ID and a link that says CLICK HERE. 

If you don’t want people to mindlessly click on links in sketchy-looking emails, Miller says you should redesign these notifications, taking the employee’s workflow into consideration. 
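As one illustration of that redesign, here is a minimal sketch of a notification that avoids the phishing look: it greets the employee by name, explains why they’re receiving it, and shows the full destination URL instead of a bare “CLICK HERE.” All names, addresses and URLs are hypothetical.

```python
# Sketch of a ticket notification designed not to look like phishing:
# personalized greeting, context for why it was sent, and a full, visible
# URL instead of a bare "CLICK HERE" link. Names and URLs are hypothetical.
from email.message import EmailMessage

def build_ticket_notification(
    name: str, to_addr: str, ticket_id: str, summary: str
) -> EmailMessage:
    msg = EmailMessage()
    msg["Subject"] = f"Update on your ticket {ticket_id}: {summary}"
    msg["From"] = "it-help@example.com"  # a consistent, recognizable sender
    msg["To"] = to_addr
    url = f"https://helpdesk.example.com/tickets/{ticket_id}"
    msg.set_content(
        f"Hi {name},\n\n"
        f"You opened ticket {ticket_id} ({summary}), and there's a new reply.\n"
        f"To view it, copy this address into your browser:\n\n"
        f"    {url}\n\n"
        "We'll never ask for your password by email.\n"
        "- The IT help desk"
    )
    return msg
```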

“Most of the times that people fall victim to a phishing attempt, it’s because they’re in a rush, or they’re trying to get through the processing of the 450 new emails they got in the last day,” she says. Not because the user is stupid.

Whatever the issues at your organization may be, Miller reiterates that working with the user is key. Try to understand how they use certain tools, what could be done differently, and what pain points your people have. 

 

#4. "Security is everyone's job"

 

While teamwork often does make the dream work, Miller says this common platitude carries more negative consequences than you might expect. 

On the one hand, this idea of shared responsibility forces other teams — developers, engineers, operations — to account for issues that aren’t exactly in their job description. 

On the other, it gives security pros the authority to make their tasks first priority, which often creates bottlenecks for everyone else. 

In the same way that assembly lines keep factories running, DevOps teams follow a pipeline to ensure software is quickly and properly deployed across an organization. 

Whether building the code, ensuring the stability of an environment or something else, each team is responsible for completing their assigned task promptly. That way, the rest of the pipeline remains intact. 

The problem with making security “everyone’s job,” however, is that the security team can come in at any phase of this process and demand additional controls, which ends up slowing the whole operation down. But it’s okay because security is everyone’s top priority, right? Wrong. 

While Miller agrees that shared responsibility isn’t inherently wrong, it is one-sided. It’s unfair. And it needs to change. 

To ensure projects are completed on time, security teams need to practice what they preach and take on some of the shared responsibility. They need to be more courteous, think bigger picture, and do what they can to ensure they aren’t bogging things down. 

“If we’re going to say that security is everyone’s responsibility, then you know what? Pipeline efficiency is everybody’s responsibility,” she says. “Production availability and stability is everybody’s responsibility, including ours. We can’t walk away from that.”

 

#5. "Use a quality gate"

 

Designed to ensure every step of the pipeline meets specific criteria, quality gates can often do more harm than good, says Miller. 

While you may insert a quality gate — like a code scan — to ensure your build is free from errors, it often takes days to get feedback, which puts the entire pipeline on hold. Well, that’s inconvenient. But it gets worse. 

If you receive feedback that there’s a critical error in your code, the quality gate requires you to go back and fix it immediately, which sends the entire project back in time. And that’s not how a pipeline works. “Pipelines go one direction,” says Miller. “Quality gates threaten to undo all of that and be sent all the way back.”

Instead of stopping the flow every time there’s an issue, Miller says you should stay focused on pipeline efficiency and CI/CD: continuous integration and continuous delivery. That way, everything keeps moving. 

“This is the thing that scares the living shit out of security people,” says Miller. “Deploy that stuff with the vulnerabilities in it. Don’t stop them. If your pipeline is efficient enough that you can fix it in a day or less, think about that from a risk perspective. Run the scan, identify the vulnerability, then flag it to be fixed in the next cycle.”
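One way to picture that “scan, flag, keep moving” step is a pipeline stage that records findings for the next cycle rather than failing the build. This is a sketch only: the `scanner` CLI and its JSON output format are assumptions, not any real tool’s interface.

```python
# Sketch of a non-blocking pipeline stage: run a vulnerability scan, record
# every finding for the next development cycle, and let the build proceed.
# The "scanner" CLI and its JSON output format are hypothetical.
import json
import subprocess
import sys

def scan_and_flag(target: str) -> int:
    result = subprocess.run(
        ["scanner", "--output", "json", target],  # hypothetical scanner CLI
        capture_output=True,
        text=True,
    )
    findings = json.loads(result.stdout or "[]")
    for finding in findings:
        # In a real pipeline this would open a tracker ticket; here we log.
        print(f"FLAGGED for next cycle: {finding['id']} ({finding['severity']})")
    return 0  # always exit 0 so the pipeline keeps flowing

if __name__ == "__main__":
    sys.exit(scan_and_flag(sys.argv[1] if len(sys.argv) > 1 else "."))
```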

While this approach may seem like a hot take to the security community, Miller reminds us that mitigation controls should be in place to limit potential damage. And they can be automated to put up the proper safeguards, so humans don’t have to. 

“Maybe you’ve identified a certain type of attack, so now you deploy a web application firewall rule, or whatever you’ve got to do. There are ways to put those mitigations in place to further reduce the risk,” she says. “The point is you haven’t stopped the pipeline from flowing. That’s what’s crucial here.”
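And a hedged sketch of the automated mitigation Miller describes: when a vulnerable endpoint is flagged, append a ModSecurity-style deny rule for that path and reload the proxy. The rules file location, rule ID and reload command are assumptions about one particular nginx/ModSecurity setup, not a universal recipe.

```python
# Sketch of an automated mitigation: when a vulnerable endpoint is flagged,
# write a ModSecurity deny rule for that path and reload the web server.
# The rules file path, rule ID and reload command are assumptions about one
# particular nginx/ModSecurity setup.
import subprocess

RULES_FILE = "/etc/modsecurity/custom_rules.conf"  # hypothetical location

def block_endpoint(path: str, rule_id: int) -> None:
    """Temporarily deny requests to a flagged path until the fix ships."""
    rule = (
        f'SecRule REQUEST_URI "@beginsWith {path}" '
        f"\"id:{rule_id},phase:1,deny,status:403,"
        f"msg:'Temporary block while the vulnerability is fixed'\"\n"
    )
    with open(RULES_FILE, "a") as f:
        f.write(rule)
    # Reload so the new rule takes effect without stopping traffic
    subprocess.run(["nginx", "-s", "reload"], check=True)

# Example: block the flagged endpoint until the next cycle's fix deploys
# block_endpoint("/api/v1/export", 100001)
```

The key point, per Miller, is that the mitigation lands without ever halting the pipeline.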

 

#6. "You just need passion to get hired"

 

Anyone on the hunt for a job has probably heard this time and time again, but Miller says to be wary of employers who use the “p” word in describing the ease of getting a job in cybersecurity. “People are not hiring for passion, no matter how much they claim they are,” she says. 

“Especially if companies are putting out these massive job descriptions with laundry lists of different technologies and things that you have to know. If they just want to see your passion and say they can teach you the technology, then why are those requirements there?”

As an employer, Miller suggests “making the job description as inviting as possible.” That way, people are encouraged to apply, not intimidated. 

“Rather than telling people they need to have five years of experience working and writing queries in Splunk,” Miller suggests, “you should focus on the transferable skills that they might have from a different job, maybe not even tech related, that would fit.”

For aspiring cyber pros, she says not to get discouraged. Don’t read a job description (or title) and say, “Oh, I’m not qualified for that.” 

There are jobs out there for you. It might just take a little reading between the lines to understand how your unique skill set would be an asset. 

For an in-depth breakdown of these common cliches, check out “Cybersecurity has a marketing problem — and we’re going to fix it” with Alyssa Miller.