I worry about the future of Computer Security.

One of the reasons I worry so much is that the deck always seems to be stacked. The bad guys have a much easier job: they have to find one bug, and we have to find them all. They have to figure out how to defeat our current security, and we have to guess what they’ll do in the future. Fundamental theories of computer science prove that we can’t recognize, track, or find all possible malicious viruses, worms, or malware, while increasing computer complexity guarantees that we keep handing them more and more bugs to exploit.

Worst of all, even when we “win”, all we’ve done is prevent their attack. We didn’t hurt them, we didn’t capture them, and we didn’t make it harder for them to try again tomorrow. Our wins are just “draws” for the bad guys. But when they win, we get hit hard.

So, yes, I often feel troubled about the future of computer security. And I do spend quite a bit of time thinking about how it could be made better. But how do you solve something with such fundamental challenges?

I don’t have a perfect answer; in fact, I don’t know that I have any answer. But I do have a couple of possibilities that I would like to see explored.

First of all, I do believe that some amount of vulnerability can never be eliminated. In the end, there will always be a way to get past a computer system’s defenses. But, even if that is true, two key questions remain:

  1. How much of our computer security vulnerability is unavoidable?
  2. How much safer would we be just by eliminating the avoidable vulnerabilities?

These are critical questions. If 90% of security vulnerability cannot be reliably eliminated, then there’s very little to do except try to manage the damage. But what if it’s closer to 50/50?

Moreover, the raw split between the two doesn’t tell the whole story. Many attacks against a system involve a chain of hacks: first you crack one system, from there another, and often several more before finally reaching the actual goal. If an attacker has to crack multiple systems and mechanisms to reach the desired data, then eliminating 50% of the individual exploits could eliminate much more than 50% of the damage.
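
To make that intuition concrete, here is a toy model of my own (not taken from any study cited here): assume a successful breach requires every link in a chain of independent exploits, and that each individual exploit has the same chance of having been eliminated. Under those assumptions, the fraction of complete attack chains that survive shrinks geometrically with chain length.

```python
# Toy model (my own illustration, under the assumptions stated above):
# a breach requires every link in a k-step exploit chain to remain
# available, and a fraction p of the individual exploits has been
# eliminated independently.

def surviving_chains(p: float, k: int) -> float:
    """Fraction of k-step attack chains still viable after removing
    a fraction p of the individual exploits."""
    return (1 - p) ** k

if __name__ == "__main__":
    p = 0.5  # eliminate half of the individual exploits
    for k in (1, 2, 3, 4):
        print(f"chain of {k} exploit(s): "
              f"{surviving_chains(p, k):.1%} of chains survive")
```

With half the individual exploits gone, a three-step chain survives only 12.5% of the time, so the damage avoided can be far greater than the 50% of exploits removed. Real exploit chains are not independent, of course, but the direction of the effect is the point.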

So, moving out of the hypothetical, are there classes of vulnerabilities that, if eliminated, might make a significant difference? How about passwords?

According to this report from Trustwave, 28% of all the breaches they investigated were the result of weak passwords. Imagine: more than a quarter of the attacks investigated by a security company were completely preventable. These attacks were not based on zero-days, previously unknown viruses, or Heartbleed-like vulnerabilities in OpenSSL. Just plain old, ordinary passwords led to 28% of the breaches.

And I suspect that those breaches led to other breaches. If so, the actual impact was much higher.

Can we actually prevent users from choosing weak passwords? I honestly believe we can: users can be motivated to create good passwords and taught how to do so. Some of my favorite research in this area is coming out of Carnegie Mellon’s CyLab Usable Privacy and Security (CUPS) Lab. Dr. Lorrie Faith Cranor is the lab’s director, and she and her students have been doing great work figuring out how users currently pick passwords and how to help them choose better ones.
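
As one small, concrete illustration (my own sketch, not a tool or policy from the CUPS lab), even a very simple server-side check, a length floor plus a blocklist of commonly breached passwords, rejects the kind of passwords behind many of these breaches. The minimum length and the blocklist contents below are hypothetical placeholders.

```python
# Minimal sketch (my own illustration, not the CUPS lab's work): reject the
# weakest passwords with a length floor and a common-password blocklist.
# The minimum length and the blocklist entries are hypothetical placeholders.

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "iloveyou"}

def is_acceptable(password: str, min_length: int = 12) -> bool:
    """Return True only if the password clears the length floor and
    does not appear on the common-password blocklist."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

print(is_acceptable("letmein"))                       # False: short and common
print(is_acceptable("correct horse battery staple"))  # True: long passphrase
```

A check like this is no substitute for the usability research the CUPS lab does, but it shows how little machinery it takes to screen out an entire class of avoidable weakness.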

Perhaps more importantly, Dr. Cranor has been making strides in getting the “prevailing wisdom” changed. In addition to her research work, she is also the Chief Technologist at the FTC. Earlier this year, she blogged about rethinking the need to change your password frequently, something that has long bothered me. At least partially due to the attention she has given the topic, there have been changes to NIST’s and Microsoft’s advice on the subject.

This is really important work. According to one report, it’s millennials, not their elders, who make the weaker passwords. We need to reverse this trend right now and see that the next generation of Internet users makes good passwords. It’s a small, fixable thing that can make a really significant difference.