Computer Security and Zombies III

The real problem is not whether machines think but whether men do.

The quotation above, from B. F. Skinner, is one of my favorites [1]. Within this pithy line lies a universe of questions, debates, and perhaps even unexplored philosophies. For my part, I believe that it also gets to the core problems of computer security.

In two previous posts (part 1 and part 2), I discussed the issue of non-thinking from the perspectives of automation and prioritized consciousness. Even highly focused human thought, with careful, conscious consideration, still relies on massive automated operations and on hiding details behind abstraction. In short, even when we’re thinking hard, we’re still doing a lot of non-thinking.

Based on the concept of the Philosophical Zombie (P-Zombie), a thought experiment about non-thinking humans, I described these non-thinking automatons in our brains as “zombies.” And despite their essential nature, the zombies are the source of many problems in computer security. One critical problem, if perhaps not the biggest problem, is that our automation and abstraction zombies cause many errors.

But if the zombie horde of mental automation and abstraction is essential, is there any hope for ever having real computer security?

I believe that there is; I believe that we can “leash” our inner zombie. The ideal scenario is one wherein we place our automation and abstraction processes within safe environments where they cannot do damage. When the ideal is impossible, we create environments where the zombies are unlikely to do damage.

Although I believe “leashing our inner zombie” is my own novel term, the concept is not at all new. Best practices for computer security engineering already include the following concepts:

  • Eliminate entire categories of errors
  • Minimize exposure
  • Simplify designs
  • Fail safely

Each of these ideas aims to limit or eliminate the damage that our non-thinking can do. For a thorough discussion, I highly recommend Ross Anderson’s book Security Engineering, which is freely available online.
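To make one of these ideas concrete, here is a minimal sketch of “fail safely” (a hypothetical example of my own, not taken from Anderson’s book): an access check that explicitly grants only the one recognized case, so that any unexpected or corrupted state is denied by default rather than accidentally allowed.

```c
#include <stdbool.h>

/* "Fail safely" in miniature: the only path that grants access is the
 * one explicitly recognized case. Any unknown or unexpected role value
 * falls through to denial, so a confused state cannot grant access. */
typedef enum { ROLE_UNKNOWN, ROLE_USER, ROLE_ADMIN } role_t;

static bool may_delete(role_t role) {
    switch (role) {
        case ROLE_ADMIN:
            return true;   /* the single explicitly granted case */
        default:
            return false;  /* everything else fails closed */
    }
}
```

Even if a bug elsewhere produces a garbage role value, the zombie doing the checking can only ever err on the side of saying no.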

But I think that there are even more powerful ideas on the horizon. One of great personal interest to me is “formal verification”: methods to prove that a computer program correctly implements its design specification. In other words, we can prove, mathematically, that no automation zombies introduced any errors into the system between the design and implementation stages.
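To give a flavor of what “proving the implementation meets the specification” means, here is a toy illustration in Lean 4 with Mathlib (a hypothetical example of my own; real verification efforts like seL4 operate at vastly larger scale). We write a tiny function, state one property from its “specification,” and prove it once and for all:

```lean
-- A tiny "implementation": clamp x into the range [lo, hi].
def clampNat (lo hi x : Nat) : Nat := max lo (min hi x)

-- One clause of its "specification": assuming lo ≤ hi, the result
-- never exceeds hi. The proof covers every possible input, unlike
-- testing, which can only sample a few.
theorem clampNat_le (lo hi x : Nat) (h : lo ≤ hi) :
    clampNat lo hi x ≤ hi := by
  unfold clampNat
  exact max_le h (min_le_left hi x)
```

Once the proof checker accepts `clampNat_le`, no input can ever violate that property; there is no zombie-introduced edge case left to find.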

So what’s the catch? If we can do these kinds of proofs, why don’t we do them for all software?

Sadly, it turns out that most programs cannot be easily verified, and some cannot be verified at all. Sometimes a system can be partially verified, or verified only under certain assumptions. Such incomplete verification can lead to a false sense of security, perhaps making things worse than before. More than one company has paid the price for announcing to the world how “unbreakable” its software is.

Fortunately, research continues and formal verification continues to improve. As a powerful example, seL4 is an operating system kernel that has been formally verified. This is really exciting because the kernel of an operating system controls most, if not all, of the dangerous stuff! If we can make the kernel secure, we have a real chance at a truly secure system! Here is a key explanation from their FAQ:

...the ARM version of seL4 is the first (and still only) general-purpose OS kernel with a full code-level functional correctness proof, meaning a mathematical proof that the implementation (written in C) adheres to its specification. In short, the implementation is proved to be bug-free (see below). This also implies a number of other properties, such as freedom from buffer overflows, null pointer exceptions, use-after-free, etc.

The formal proof has some limitations, as discussed further in the FAQ. The creators of seL4 very explicitly disclaim that simply using their kernel will automatically make a system secure. But the formal proof does guarantee, for example, that no hacker will break into the kernel using a buffer overflow attack. (See also the seL4 whitepaper.)
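To see what that guarantee rules out, here is a hypothetical sketch (my own illustration, not seL4 code) of the classic bug class in question. An unchecked copy into a fixed-size buffer can write past its end; the checked version verifies the bound first and fails safely instead:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* A bounds-checked copy: the kind of discipline that a functional
 * correctness proof guarantees everywhere in the verified code. */
static bool copy_checked(char *dst, size_t dst_len, const char *src) {
    size_t n = strlen(src);
    if (n + 1 > dst_len)        /* src plus its terminator won't fit */
        return false;           /* refuse rather than overflow dst */
    memcpy(dst, src, n + 1);    /* safe: fits, terminator included */
    return true;
}
```

A proof like seL4’s establishes that no code path in the kernel omits this kind of check, so the overflow category of attack simply has nothing to grab onto.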

Returning to our discussion of inner zombies, I believe that solutions like formal methods provide the best leashes. Using these mathematical proofs, we can chain the non-thinking horde in our brains to ensure that it does not introduce vulnerabilities. Perhaps we cannot live without them, but with a little help from mathematics, we can herd the zombies a little more effectively.

[1] Amusingly, Skinner did not believe in “free will”, so it’s not entirely clear what he meant by “thinking.”
