Wisdom: Not AI’s Strongest Attribute

In my teaching as an Adjunct Professor, I have adapted my exams to incorporate ChatGPT. I have always given essay-based exams, but I now have the students use ChatGPT as part of the exam. Typically, this takes the form of questions that the students pose to ChatGPT; they then provide their critiques of its output.

As part of my process for crafting an exam, I often experiment with ChatGPT to find questions it seems to struggle with in one way or another. For this past Fall 2023 semester, one of the questions I played around with dealt with the concept of “Defense in Depth.”

Defense in Depth is an important design philosophy that recognizes that people and systems will fail. Ergo, it is in a defender’s best interest not to rely on just one component and/or person for maintaining the security of the overall system. Said another way, no single failure should result in a catastrophic security failure.

However, the defense-in-depth concept is often abused in marketing literature or by other untrained voices. The most common abuse is to assume that defense in depth is primarily about the number of layers. In this very wrong view, simply adding another security layer, no matter what it does or how it works, creates more depth of defense. Such a viewpoint would suggest that a system with four layers must be better protected than a system with two.

The problem with this view is that it ignores the requirement that any additional layers work in a complementary or compensatory fashion. Knowing how to combine layers to create and maintain this effect requires thoughtful design and correct implementation. For example, in multi-factor authentication (MFA), it would generally make little sense to combine two passwords, even if they are different. Password authentication, like every type of authentication, has known strengths and weaknesses, and stacking the same strengths and weaknesses does not do much for a deep defense.

On the other hand, combining passwords with some other kind of authentication, such as a biometric, greatly increases the security of the authentication system. An attacker would generally need to exploit weaknesses in both types of authentication to compromise the overall authentication process.
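The intuition above can be sketched numerically. In this toy model (the probabilities are made-up numbers for illustration, not measurements), defeating two genuinely independent factors requires two unrelated attacks, so the chances multiply; two passwords tend to fall to the same attack, so little is gained:

```python
# Toy model: why factor *diversity* matters in MFA.
# All probabilities here are illustrative assumptions, not real data.

# Assumed chance an attacker defeats each factor in a given attack:
P_PASSWORD = 0.10   # e.g., phishing or credential stuffing
P_BIOMETRIC = 0.01  # requires a different, unrelated attack

# Two independent factors: the attacker must defeat both,
# so (assuming independence) the probabilities multiply.
p_password_plus_biometric = P_PASSWORD * P_BIOMETRIC

# Two passwords share the same weaknesses: one phishing attack that
# captures the first password typically captures the second as well,
# so the combined risk stays near the single-factor risk.
p_two_passwords = P_PASSWORD

print(f"password + biometric: {p_password_plus_biometric:.4f}")
print(f"two passwords:        {p_two_passwords:.4f}")
```

Under these assumptions, the diverse pair is two orders of magnitude harder to defeat than the password-only pair, which is the whole point of combining complementary layers.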

In short, defense in depth is more than just piling layers on top of each other.

For my exam question, I thought I would see how ChatGPT responded to this issue. I started with some generic questions about ways in which the term “defense in depth” is misused. ChatGPT responded with solid, but very general, answers. But it never really addressed the issue of intelligently combining the layers. So I finally asked it about the issue outright:

[Screenshot: question posed to ChatGPT]

[Screenshot: ChatGPT’s response]

ChatGPT then went on to list the following eight factors: Risk Assessment, Adaptability, Integration, Prioritization, Monitoring and Analytics, User Education and Awareness, Regular Testing and Simulation, and Incident Response Planning. Although ChatGPT’s immediate response clearly addressed my question, much of the content within these eight factors did not. For example, for “User Education and Awareness” it said this:

[Screenshot: ChatGPT’s “User Education and Awareness” response]

As you can see, this advice, while useful for thinking about intelligent design in general, does not really address the need for intelligent design in making the layers work correctly together.

Trying to push ChatGPT to be even more explicit, I asked:

[Screenshot: follow-up question posed to ChatGPT]

ChatGPT then responded with a number of examples, but many of them still did not answer the question very well. Here are two of the ChatGPT examples that I thought were particularly bad answers:

[Screenshot: first ChatGPT example]

[Screenshot: second ChatGPT example]

Even though both answers offer good general advice, they are poor answers to the question posed.

Wrapping up, it is safe to say that wisdom is not AI’s strongest attribute. In its current form, ChatGPT is proficient at laying down a foundational understanding, but it falls short when it comes to the wisdom only experience can bring. Sure, it can gather and present information with impressive speed, but it is not quite there yet in terms of deep, insightful experience. As we continue to navigate the evolving landscape of AI, it is crucial to remember these boundaries. For a more in-depth look, especially at how ChatGPT’s training and potential biases influence its responses, I encourage you to check out our earlier blogs. They shed more light on the inner workings of AI systems like ChatGPT.