AI is going to do a lot of interesting things in the coming months and years, thanks to the detonations set off by GPTs. But one of the most important changes will be the replacement of our existing software.
AI-based applications will be completely different from those we have today. The new architecture will be a far more elegant, four-component structure based around GPTs: State, Policy, Questions, and Action.
Fundamentally it’s a transition from something like a Circuit-based architecture to an Understanding-based architecture.
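To make those four components concrete before we dig in, here’s a minimal sketch of what an SPQA-style application could look like. Everything in it — the class name, the fields, the call_model placeholder — is my own illustration of the shape, not a defined spec:

```python
from dataclasses import dataclass

def call_model(prompt: str) -> str:
    """Placeholder for a real GPT call. Hypothetical, for illustration only."""
    return f"<model response to: {prompt[:60]}...>"

@dataclass
class SPQAApp:
    state: str   # State: everything the system knows about the org (logs, docs, inventory)
    policy: str  # Policy: desired outcomes, constraints, and risk tolerance, in plain language

    def _prompt(self, request: str) -> str:
        # Questions and Actions are both just natural language,
        # interpreted against the same State + Policy context.
        return f"STATE:\n{self.state}\n\nPOLICY:\n{self.policy}\n\nREQUEST:\n{request}"

    def ask(self, question: str) -> str:
        """Questions: read-only queries answered from State + Policy."""
        return call_model(self._prompt(question))

    def act(self, command: str) -> str:
        """Actions: commands executed in line with State + Policy."""
        return call_model(self._prompt(command))
```

With that shape, the entire “API” is something like SPQAApp(state=..., policy=...).ask(“What are our riskiest applications?”).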
Our current software is Circuit-based, meaning the applications have explicit and rigid structures like the etchings in a circuit board. Inputs and outputs must be explicitly created, routed, and maintained. Any deviation from that structure results in errors, and adding new functionality requires linear effort on the part of the organization’s developers.
Circuit isn’t the perfect metaphor, but it’s descriptive enough.
New software will be Understanding-based. These applications will have nearly unlimited input because they’re based on natural language sent to a system that actually understands what you’re asking. Adding new functionality will be as simple as asking different questions and/or giving different commands.
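To see the contrast in miniature, here’s a hedged sketch of both styles in toy Python. The commands, data shapes, and call_model placeholder are all invented for illustration:

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real GPT call. Hypothetical, for illustration only."""
    return f"<model response to: {prompt[:60]}...>"

# Circuit-based: every input and output path is explicitly wired,
# and anything off the etched path is an error.
def circuit_app(command: str, data: dict) -> str:
    if command == "count_vulns":
        return str(len(data["vulns"]))
    if command == "list_open_ports":
        return ", ".join(data["open_ports"])
    raise ValueError(f"unknown command: {command}")  # new feature = new code + new routing

# Understanding-based: arbitrary natural language goes to a model that
# grasps the request, so new functionality is just a new sentence.
def understanding_app(request: str, context: str) -> str:
    return call_model(f"CONTEXT:\n{context}\n\nREQUEST:\n{request}")
```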
(record screech sound)
“Nope! GPTs don’t have true understanding…”
⏱️ 60 seconds starting now…
There are many who don’t see what GPTs are doing as actual “understanding”, but I think I can convince most holdouts in less than 60 seconds. Ready? Go.
First, let’s define terms.
Intelligence is something like, “The collection, analysis, and sharing of information on the enemy to enable superior decision-making.”
- Data: Raw facts or figures without context
- Information: Data that has been processed, organized, or structured
- Knowledge: Information that has been combined to become useful
- Understanding: The ability to apply knowledge in new situations and contexts
Or in compacted form:
- Data is raw
- Information is organized
- Knowledge is useful
- Understanding is applied
So the definition we’re using is:
The ability to apply knowledge in new situations and contexts.
So, can GPTs do this? Let’s answer with an example. Do they understand concepts like Star Wars and Forbidden Love and Shakespeare well enough to handle a prompt like this?
Write a love song from Luke to Leia.
I asked it to do that, and here’s what it wrote…
O Leia, fairest of all maidens fair,
Thy beauty doth my very soul ensnare.
Thy shining eyes, like stars that light the night,
Doth fill my heart with love’s sweet, shining light.

Thy gentle touch, like feathers on my skin,
Doth make my heart to dance and leap within.
Thy voice, like music to my eager ears,
Doth soothe my soul and calm my deepest fears.

And though our love may be forbidden, true,
I cannot help but give my heart to you.
For thou art all I ever could desire,
My Leia, thou hast set my heart on fire.

So let us steal away into the night,
And let our love take wing and take to flight.
For in thy arms, I know I shall find peace,
And all the world’s troubles shall surely cease.

A LONGING GPT
Try it yourself and add new contexts/situations to the mix.
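One way to try it programmatically is with the OpenAI Python client. The model name below is just an example; any capable GPT endpoint will do:

```python
# Assumes `pip install openai` and an OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # example model name; use whatever you have access to
    messages=[
        # Edit this prompt to add new contexts/situations to the mix.
        {"role": "user", "content": "Write a love song from Luke to Leia."}
    ],
)
print(response.choices[0].message.content)
```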
That’s a Shakespearean poem, about Forbidden Love, in the Star Wars universe, between two siblings. And it’s applied to a completely new situation/context that I just made up.
Awkward
Notice that I didn’t even mention Star Wars or Forbidden Love in the prompt! It understood the meaning of “Luke and Leia”, and “Love”, and it inferred that it was forbidden because it knows siblings aren’t supposed to feel that way about each other. This is commonly known as understanding the poem and its contents.
A lot of the confusion about GPTs and whether they understand things comes from the conflation of understanding with experiencing.
Do GPTs understand things? Yes. The magic of the tech is that GPTs have to accidentally learn concepts in a deep way so they can properly predict the next letter in a sequence. And they can then apply those concepts in new situations. That’s understanding.
But does a GPT know what it feels like to understand love? Or what it feels like to contemplate the universe? Or human mortality? No. They don’t have feelings. They’re not conscious. They don’t experience things one little bit.
If you argue that you must feel to understand, then you’re saying understanding requires consciousness, and that’s a chasm as big as the one Luke swung Leia across.
But we’re not asking GPTs to experience things, we’re asking them to learn a concept and then apply that concept to new situations and contexts. That’s understanding. And they do it quite well.
Software that understands
It’s difficult to grok how big the difference is between our legacy software and software that understands.
Rather than fumble an explanation, let’s take an example and think about how it’d be done today vs. in the very near future with something like an SPQA architecture. I say “something like” because the exact winning implementations will be market-driven and unpredictable.
A security program today
So let’s say we have a biotech company called Splice based out of San Bruno, CA. They have 12,500 employees and they’re getting a brand-new CISO. She’s asking the team to immediately start building the following:
- Give me a list of our most critical applications from a business and risk standpoint
- Create a prioritized list of our top threats to them, and correlate that with what our s