Lies Security People Tell — False Equivalency

Rafal Los
6 min read · Dec 22, 2021

Have you ever had someone in the cyber security world point to an article about a class-action settlement from a data breach, or a cost-of-breach study and say something like this?

See, that breach cost them $200M, if they had spent half of that on a security budget this wouldn’t have happened

I have. Far too many times. It’s bullshit.

It’s one of those things security people say who are smug but clueless about how the real world operates. They say it to justify why they need the budget to buy some new widget or hire another vendor or consultant. Or whatever, the reason doesn’t actually matter. What matters is that people take this as indicative of how security professionals actually think. In fact, it demonstrates remarkable ignorance of the realities of the profession we work in.

Let me explain.

While I think we agree far too many companies spend far too little to secure their critical assets to a reasonable level, it’s completely false to say that if you simply add to your budget you won’t get breached and incur associated costs.

Security isn’t a zero-sum game

Spending $100 more on security does not have a direct impact on reducing risk, per se. Adding anti-malware tools to everyone’s laptop in your company does not guarantee that those laptops won’t get ransomware or other malware dropped on them. It simply reduces the risk, but that’s only if you pick a competent vendor, select the optimal controls, and operationalize effectively.

That’s a lot of ifs.

Translated to the real world, it’s sort of like saying that if you wear seatbelts properly in a sound vehicle you will reduce your risk of severe injury or death in an accident — but it does not prevent the bad result. It’s really risk reduction. How much you reduce depends on what formula you use, or what you believe. But we can all agree you reduce risk by some measurable degree.

This is the most important point here. You can’t say “Well if they had installed a WAF, they wouldn’t have had their web app compromised by that well-known attack.” Even if you couch it by using “well-known” as the qualifier — you still can’t say that straight-faced with any certainty. I say that as someone who’s installed WAFs at scale in very large companies and watched the best tech in the world get foiled by poor configuration, “accidental exclusion”, or poor operationalization.

If you’re a security professional who says things like the above, stop now. If you’re a vendor who says things like this in marketing literature or as part of a sales pitch — you should be ashamed of yourself.

Adversaries are human and creative

To further caveat the above section, even the best tech, implemented and monitored to optimal levels, can be bested by human creativity. I mean that sincerely. There are two ways this manifests.

First is the creativity of humans on our side of the table. I recall one such case where we had implemented some brand new IPS (Intrusion Prevention System) tech for a customer. Designed, configured, verified, and optimized — with everything we could think of. It was a beautiful design that should have detected and stopped known malicious activity against a particular project that held sensitive information. But then that project was compromised, and that data was stolen. Did the tech fail? No… an ops team member noticed that the IPS was causing some latency while it was inspecting traffic, so they routed the web app around the IPS segment, completely bypassing our security. Creativity won, and the app was faster, and then it got breached.

Second, creativity by attackers and threat actors should be no surprise. When someone’s job becomes outsmarting you to make money, or in some cases survive, they’ll best you. This is especially true of nation-state-sponsored threat actors who are brought in to do a job, at the risk of being killed or worse. That’s a powerful motivation. Bad actors can get unbelievably creative, and do things that some of us “good guys” would just not think of because our brains just don’t work that way. In fact, this is why “red teams” are so valuable for advanced and mature security practices, they think and act in ways that I dare say traditional defenders do not.

To err is human, to arr is pirate

People make mistakes. That’s what makes us human. If you’re hoping for that “Machine Learning powered AI security” to save you, keep waiting, sucker. People will keep making mistakes, and us telling our users that they just need to “stop clicking stuff, idiot” makes us look stupid. Case in point, I can tell you anecdotally that when phishing exercises were conducted across the company a few jobs back, the people who thought of themselves as the most security-savvy (aka, arrogant security folk) were the most likely to click on something nefarious. Why? Arrogance, and the “I won’t fall victim, I’m smarter because I’m infosec” mentality.

Look, people will follow links they probably know they shouldn’t and download and open attachments they have a pretty good idea are bad. That’s just the way it is. You can shout into that wind, or you can deploy a strategy that doesn’t depend on altering fundamental human behavior.

All the training, all the tools, and all the restrictions are no match for a momentary “oops” from an over-worked, highly stressed, and tired CFO. Rather than fighting human nature, we as a profession need to acknowledge it and build our models to work with, not against, human behavior.

Identify, Protect, Detect, Respond, and Recover

Bold statement: No one part of the NIST model is more important than the others.

If you can’t identify your assets (aka, know what you’ve got to protect) you’re going to do a shit job protecting those assets. You can’t possibly protect what you don’t know exists. Just like if you’re ignorant of the fact that one day you will have a critical system be compromised, encrypted, and held ransom, you’re going to be in for a hell of a bad time when it happens.

I don’t plan on ending up in a bad way health-wise from some terrible accident, and yet I buy insurance that will support my family if and when that happens. We need to start thinking of cyber security in more real-world terms than the absolutes I still hear being discussed at conferences and on Twitter. (Yes, I know conferences and Twitter aren’t the real world, but these people are part of your SOC, your sales team, and your execs, so you have to address it.)

The reality is that you have to spend proportionally to your capabilities in each of these five areas. We’ve spent 20 years pumping money into “protect” (otherwise known as “prevent”) and get angry when prevention fails and we’re left with chaos and gnashing of teeth. Invest in detection and response. Invest in identification. Invest in recovery capabilities. These are all areas you must have proficiency and operational expertise in — but you can’t convince me that if only I had done X better, we wouldn’t have to spend Y later when disaster strikes. That’s just not how it works.

TL;DR

You made it through another slightly ranty post, sorry it got away from me as I was writing this. It’s simple, stop making false statements that are hurting cyber security’s credibility. The end.


Rafal Los

I’m Rafal, and I’m a 20+ year veteran of the Cyber Security and technology space. I tend to think with a wide-angle lens, and am unapologetically no-bullsh*t.