Sometimes I wish I had studied Philosophy in college. Philosophers get to study, discuss, and debate cool things like free will, intention, and Zombies.

Wait… What?

The so-called Philosophical Zombie (P-Zombie) is a hypothetical construct used in certain thought experiments. The basic concept is that the P-Zombie can look and behave like a normal human being, but there is no sentience on the inside. The P-Zombie can be thought of as programmed to mirror or mimic human behavior: it can express love and hate even though it feels none of these things itself. It can pursue a job, make friends, and perhaps even create works of art. But internally, there is no consciousness or free will; the P-Zombie is merely a hunk of meat following highly complex programs.

One of the fascinating P-Zombie thought experiments I read about years ago was whether or not one could create a test that would distinguish a P-Zombie from a human being. What can a sentient being do that a non-sentient being cannot? If we cannot devise a test to measure it, does consciousness even exist?

But most of us aren’t philosophers and probably don’t spend our time debating whether or not we’re conscious. On the other hand, from these more esoteric discussions we can derive more practical questions. For example, consider the problem of determining intention. That is, beyond determining a person’s actions, can you determine what they intended to do? If so, how?

Given that we don’t have a computer terminal plugged into each other’s brains (yet), determining intention is a hard problem. This is especially true when there are negative outcomes. If someone is responsible for something bad happening, how can we know if they acted maliciously (intending to cause harm) or innocently (causing harm without intending to do so)?

Consider the issue of mistakes within the context of computer security. Mistakes that can be exploited to gain unauthorized access to a system are often called vulnerabilities. It is particularly difficult to defend against hackers abusing vulnerabilities because, even when everything is configured correctly and used properly, many vulnerabilities bypass one or more defensive systems as if they weren’t there. Because of this, nations and criminal organizations alike purchase information about how to exploit vulnerabilities. It’s not uncommon for a government to pay $50,000 or more for a working exploit against a vulnerability.

Think about that for a minute. Someone can make $50,000 for knowing about a subtle mistake in software and knowing how to exploit it to compromise security. So… isn’t that an incentive for engineers to purposefully insert errors into software?

Returning to our P-Zombie experiment, let’s imagine two hypothetical actors. We will call the first a Security Double Agent: he or she intentionally inserts vulnerabilities. Now imagine a second actor who intends to do things correctly but fails, inserting a vulnerability by accident. Let’s call this actor a Security Zombie.

How would we recognize the difference between a Security Double Agent and a Security Zombie?

This is not theoretical. Dr. Matt Green and his team at Johns Hopkins uncovered a vulnerability in Apple’s iMessage a few months ago. I only recently had the time to read and review his findings. As I did so, I was shocked at the nature of the cryptographic weaknesses they exploited.

It turns out that my Network Security class created a cryptographic protocol with a strikingly similar weakness.

And my students created it on purpose.
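For the technically curious, the family of weakness at issue here is, broadly, encryption without integrity protection: if ciphertext is not authenticated, an attacker can tamper with it in transit and the receiver has no way to notice. The sketch below is purely illustrative and is not Apple’s actual construction or my students’ protocol; it uses Python with the pyca/cryptography library to show how an attacker who can guess part of a message can bit-flip unauthenticated AES-CTR ciphertext into a different message without ever learning the key.

```python
# Illustration only: unauthenticated AES-CTR is malleable.
# (Hypothetical toy example; not the iMessage protocol or my class's protocol.)
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)    # secret key, known only to sender and receiver
nonce = os.urandom(16)  # CTR nonce; must be unique per message in a real protocol

def encrypt(plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt(ciphertext: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

# The sender encrypts a message. There is no MAC, so nothing binds the
# ciphertext to the original plaintext.
original = b"PAY $100 TO ALICE"
ciphertext = encrypt(original)

# The attacker guesses the plaintext and XORs in the difference between the
# guess and a chosen replacement. The key is never needed.
target = b"PAY $900 TO CAROL"
tampered = bytes(c ^ o ^ t for c, o, t in zip(ciphertext, original, target))

print(decrypt(tampered))  # b'PAY $900 TO CAROL' -- accepted without complaint
```

The standard fix for this toy example is to authenticate the ciphertext, for instance with an HMAC or an AEAD mode such as AES-GCM, so that any modification is detected before the decrypted output is trusted.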

In my class, I provide the students with a fake network I call “Playground.” Within this artificial construct, I have them create all of the security from the ground up. Once they’re finished, their next objective is to try and hack the systems they’ve just created. My primary goal is to teach them how hard it is to get security right, even when you’re trying hard.

My students this year decided to try hard in the other direction. They thought they would have more fun trying to hack each other and purposefully created a weak protocol to make such hacking easier.

You see? My students confessed to me that they were Security Double Agents. And the protocol they came up with was similar in design and weakness to a protocol created by Apple.

Given Apple’s track record, the most likely explanation for the weakness in their iMessage protocol is Security Zombies. NOTE: This is not an insult or a put-down of Apple whatsoever. All engineers and designers make mistakes, and all of us play the Security Zombie at one point or another.

But critically, how are we to know? How are we to determine if a company as a corporate entity, or rogue actors within it, inserted weaknesses intentionally or not? As the financial incentives for Double Agents grow, this question will become increasingly important.