I really enjoy teaching the Network Security course at Johns Hopkins University. It’s a privilege to work with the students and to spend time thinking about the fundamental principles behind my profession.

The best reward of all, though, is when former students send me an email about applying lessons from the course in their careers. Or, as it turned out last night, in their apartment buildings:

Hey Dr. Nielson, I moved into my new apartment complex a few days ago and noticed something about its security that reminded me of you and your lectures. We have key fobs for badging into the building that also double as our apartment keys. For some reason, the first floor elevators require you to badge in with your fob before you can call an elevator. However, if you take the stairs to the second floor (which doesn't require badging, presumably for fire purposes), the second floor elevators do not require you to badge to request an elevator. Physical security! The first thought I had when I saw that the 1st floor elevators required badging was, "huh...wonder if I can get around this." So thanks for teaching us to think like an attacker...

Obviously, I’m very excited about having students come out of my course thinking about how to bypass security. This kind of thinking is absolutely essential to the development of better security. The bad guys are always going to be thinking of ways to get around security technology, and unless the good guys are doing the same there’s no chance of stopping them.

But the best part of my former student’s message is that he identified a key principle as part of his analysis. He didn’t simply discover that he could get an elevator on the second floor without a badge; he also formed a hypothesis as to the reason behind it. In this case, my student guessed that it had to do with how the fire code applies differently to different floors. This is a very reasonable guess.

It is quite common for computer security to break down at the “boundaries.” I do not mean the “edges” of the system, but the internal dividing lines between different technologies, components, programs, users, equipment, and even regulations like fire code. Each one of these pieces has its own context in which the security is framed and makes sense. At the boundaries, the context often changes and what was appropriate security before is no longer so. Getting the translation right when combining these security contexts is an overwhelmingly difficult challenge.
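The elevator story can be sketched in code as a toy illustration of this boundary problem. Everything here is invented for the analogy (the fob IDs, function names, and checks are hypothetical, not from any real access-control system): two entry points reach the same protected action, but the check lives in only one of them because each entry point was designed in its own security context.

```python
# A hypothetical sketch of the elevator scenario. The security check is
# enforced by one caller, not by the shared action itself, so a second
# path around the check exists at the boundary between the two contexts.

AUTHORIZED_FOBS = {"fob-1234"}  # invented example data

def call_elevator(floor: int) -> str:
    # The protected action performs no check of its own; it trusts callers.
    return f"elevator dispatched to floor {floor}"

def lobby_panel(fob_id: str) -> str:
    # First-floor context: badge required before the call is made.
    if fob_id not in AUTHORIZED_FOBS:
        raise PermissionError("badge required")
    return call_elevator(1)

def second_floor_panel() -> str:
    # Second-floor context: designed under different assumptions
    # (open stairwell, fire code), so no badge check at all.
    return call_elevator(2)
```

An attacker refused at `lobby_panel` simply takes the stairs and uses `second_floor_panel`. The usual remedy is to enforce the check inside the shared action rather than in each caller, so every path crosses the same security context.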

Security designers need to understand this boundary problem and expend sufficient resources designing the interfaces between two or more security contexts. Offensive security professionals, such as penetration testers, need systematic and thorough methods for finding boundaries, understanding the associated security contexts, and determining how differences between those contexts can be exploited. In either case, it is worth noting that boundaries are not always technological and not always obvious; the different fire codes that apply at different levels of a building are one example. It often requires a fair bit of professional experience to identify these more subtle cases.

But not always! Sometimes, it just takes a clever college kid with a small amount of training and a lot of curiosity!