What will computing look like in 100 years?

Of all the questions that perplex me, the one that concerns me the most is how much of our future computing resources must be wasted on matters of security. And yes, I mean wasted. Consider how much energy goes into not producing or developing things. Think about all the researchers who spend their careers analyzing how bad people did bad things, or trying to stop bad people from doing bad things. What amazing discoveries might they have made in a peaceful world with no deliberate theft or invasion of privacy? Oh, how they might have used those intellects differently!

It’s a dream, of course, as there will always be people willing to lie, steal, and even murder. But our world’s history has seen a few swings between different ends of the war/peace spectrum, and the impact on human prosperity and development is not subtle. One of my favorite examples is the building of cities during the Pax Romana. Many cities constructed at this time had no walls and were built at the intersection of easy trade routes. After the Roman government collapsed, Western Europe broke into warring tribes, cities, and mini-states, and almost every city needed walls of some kind. More troubling, many of the most important cities were built in defensive locations at the tops of hills and in mountainous areas.

When you consider how people moved their goods at the time, the impact of relocating major metropolitan centers onto hilltops was economically just short of catastrophic. That security decision, effective from a survival perspective, added heavily to the cost of every good and service sold.

There is, in my mind, a worrying possibility that humanity’s cyber world could suffer the same fate. What if the amount of security required to keep our Internet, or maybe even desktop computing, operational drastically reduced its utility? There are warning signs that we might be moving in that direction.

Consider this blog post from Naked Security and the white paper from Sophos it references. They deal with an attack on Microsoft Office products that has not been eradicated in four years. Both the post and the paper are great reads and only require moderate technical knowledge to get the important points. The paper, in particular, outlines an attack that can be used in Word and Excel documents for certain versions up to and including Office 2010. It describes how defenders have spent those four years attempting to clearly identify the attack code so as to block it. Every time they came up with a kind of “signature”, the crooks would figure out another way to disguise it.
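To see why this cat-and-mouse game is so hard to win, here is a deliberately simplified sketch (not the actual Office attack, and the signature bytes are made up for illustration) of signature-based detection and the trivial kind of disguise that defeats it:

```python
# Illustrative sketch: why byte-level signatures are fragile.
# A scanner looks for a known byte pattern; a trivially "disguised"
# variant of the same payload no longer contains those bytes.

KNOWN_SIGNATURE = b"\xde\xad\xbe\xef"  # hypothetical signature bytes

def scanner_flags(document: bytes) -> bool:
    """Naive signature-based detection: flag if the known bytes appear."""
    return KNOWN_SIGNATURE in document

def disguise(payload: bytes, key: int = 0x5A) -> bytes:
    """XOR-encode the payload; a real attack would also ship a decoder."""
    return bytes(b ^ key for b in payload)

original = b"header" + KNOWN_SIGNATURE + b"footer"
disguised = b"header" + disguise(KNOWN_SIGNATURE) + b"footer"

print(scanner_flags(original))   # True  -- signature matches
print(scanner_flags(disguised))  # False -- same payload, new byte pattern
```

The defender’s only recourse is a new signature for the disguised form, at which point the attacker picks a new key, and the cycle the paper describes repeats.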

But the blog post made an interesting recommendation that, from a certain point of view, worries me:

Consider using a stripped-down document viewer. Microsoft’s own Word Viewer, for example, is usually much less vulnerable than Word itself. Also, it doesn’t support macros, another Word-based malware trick commonly used by ransomware.

Using a stripped-down viewer, like Word Viewer, is excellent security. But it is also a sign that security is getting so out of control that the only safety is in using stripped-down software. Could that trend continue? Is it possible that there’s a future wherein the average user is only permitted access to low-utility software?

It’s a troubling but not inconceivable thought. Certainly a truly worldwide computing catastrophe could cause such changes, but so could an ongoing, and nearly imperceptible, shift. Year after year we might hire more security experts, invest more in security research, spend more on security software, and lock away functionality until the weight of computer security becomes unsustainable and we collapse into our own cyber dark ages.

What is the solution? I don’t claim to have the answer to that. Far beyond the technical concerns, there are political, social, economic, and cultural issues at play. But I believe that by being aware of this hypothetical future, we have the best odds of preventing it.