Introduction: The Trouble with Defining Things
When humans describe something, we tend to talk about what it is.
- A cup is something you can drink from.
- A chair is something you can sit on.
- A plan is a set of steps to get from here to there.
But logic, especially in its strict symbolic form, forces us to also define something by what it is not.
- A cup is not a chair.
- A chair is not a rock.
- A plan is not a random list of actions.
This might sound trivial, but when we translate it into the language of computers, the problem becomes enormous. Every action, every fact, every property must be encoded not just positively but also in relation to all the negatives it excludes. That is the heart of what philosophers of AI call the frame problem: how to handle all the things that do not change when something does.
In the latest AI research, especially the paper Teaching LLMs to Plan, we see the return of this problem in modern clothing. Large language models (LLMs), the neural pattern-recognizers of our time, are being trained to act more like symbolic planners: to reason step by step, to check whether preconditions are satisfied, to validate each action’s effects. It’s a powerful move, but it also drags us back toward the old burden of defining the world in terms of both what is and what is not.
This essay explores why this matters. We’ll unpack the frame problem in plain language, show how it reappears in today’s attempts to bolt symbolic logic onto neural models, and then broaden the view: what does this tell us about life, intelligence, and entropy as information management? Ultimately, we’ll see that while logic requires exhaustive definitions, life thrives by ignoring most of what is not — focusing instead on the thin thread of what matters.
Part 1: A Short History of the Frame Problem
In the 1970s, AI pioneers created symbolic planning systems. These were programs that could reason about the world by representing it as a collection of facts and rules. For example:
- Fact: “The cup is on the table.”
- Rule: “If you pick something up, it is no longer on the table.”
The goal was to chain rules together into plans: “If I want the cup on the shelf, I must pick it up, then put it on the shelf.”
This works fine for toy examples. But soon researchers noticed a hidden cost.
If you pick up the cup, does the table vanish? Does the cup change color? Do the walls collapse? Of course not. But unless you tell the system that those things remain unchanged, it cannot know. To model the world accurately, the program must carry around an enormous list of “everything else stays the same.”
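To make the toy example concrete, here is a minimal sketch in Python (illustrative names only, not any real planner's API) of the classic STRIPS-style representation: state as a set of facts, actions as precondition/add/delete triples.

```python
# A minimal STRIPS-style sketch: state is a set of facts, and each action
# lists its preconditions, the facts it adds, and the facts it deletes.
# All names are illustrative, not a real planner's API.

state = {"cup_on_table", "hand_empty", "table_exists", "walls_standing"}

actions = {
    "pick_up_cup": {
        "pre": {"cup_on_table", "hand_empty"},
        "add": {"holding_cup"},
        "del": {"cup_on_table", "hand_empty"},
    },
    "put_cup_on_shelf": {
        "pre": {"holding_cup"},
        "add": {"cup_on_shelf", "hand_empty"},
        "del": {"holding_cup"},
    },
}

def apply(state, name):
    a = actions[name]
    assert a["pre"] <= state, f"preconditions of {name} not met"
    # Everything not in the delete list silently persists. This one
    # convention is what spares us from writing "the table still exists,
    # the walls still stand, ..." after every single action.
    return (state - a["del"]) | a["add"]

for step in ["pick_up_cup", "put_cup_on_shelf"]:
    state = apply(state, step)

print("cup_on_shelf" in state)   # True
print("table_exists" in state)   # True: persisted by convention, not by proof
```

Notice that the sanity of the final state rests entirely on that one comment: anything the action does not mention is assumed to survive. Pure logic makes no such assumption for free.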
That’s the frame problem: the impossibility (or at least impracticality) of exhaustively specifying what does not change every time something does. In philosophy, it became a metaphor for the human condition: how do we know what to ignore? How do we know what is irrelevant? How do we filter the infinite “nots” that surround every “is”?
Part 2: The Return of Symbolic Rigidity
Fast forward to 2025. We have LLMs like GPT-4 and LLaMA-3, capable of writing essays, solving math problems, generating code, and even showing some capacity for reasoning. They don’t explicitly encode the world in rules; they work statistically, predicting the next word in a sequence based on vast training data. They’re fuzzy, flexible, and surprisingly powerful.
But they’re unreliable planners. If you ask them to produce a sequence of actions that follows strict rules — like a robot stacking blocks or delivering packages — they often fail. They hallucinate steps, forget conditions, or invent impossible moves. They’re good storytellers, not good accountants.
Enter PDDL-INSTRUCT. This new method tries to bridge the gap by training LLMs to think like old-school planners. Instead of just generating a plan, they must justify each step:
- Check preconditions.
- Apply effects.
- Verify that nothing breaks.
And crucially, each step is externally validated by a rule-based verifier. The model is taught not just to “sound right,” but to be right according to symbolic logic.
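Schematically, the loop looks something like the sketch below. This is a rough illustration with hypothetical names, not the paper’s actual pipeline: in PDDL-INSTRUCT the model writes this chain of reasoning out in text, and an external rule-based validator plays the role that `verifier_ok` plays here.

```python
# Schematic of the "justify every step" loop behind PDDL-INSTRUCT-style
# training. Hypothetical names; the real method checks the model's stated
# reasoning with an external plan validator, not a Python function.

actions = {
    "pick_up_cup":      {"pre": {"cup_on_table", "hand_empty"},
                         "add": {"holding_cup"},
                         "del": {"cup_on_table", "hand_empty"}},
    "put_cup_on_shelf": {"pre": {"holding_cup"},
                         "add": {"cup_on_shelf", "hand_empty"},
                         "del": {"holding_cup"}},
}

def verifier_ok(before, name, after):
    # Stand-in for the external validator: independently re-derive the step.
    a = actions[name]
    return a["pre"] <= before and after == ((before - a["del"]) | a["add"])

def execute_plan(state, plan):
    for name in plan:
        a = actions[name]
        if not a["pre"] <= state:                    # 1. check preconditions
            return None, f"precondition failure at {name}"
        new_state = (state - a["del"]) | a["add"]    # 2. apply effects
        if not verifier_ok(state, name, new_state):  # 3. external validation
            return None, f"verifier rejected {name}"
        state = new_state
    return state, "valid"

state = {"cup_on_table", "hand_empty"}
print(execute_plan(state, ["pick_up_cup", "put_cup_on_shelf"]))
```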
This is powerful. The results show big improvements: from 20–30% correct plans up to 90%+. But it also reintroduces the same structural burden as classical AI: every step must account for not just what changes, but also what does not.
Part 3: Defining by Negation
To see why this matters, return to the idea at the heart of this essay: defining what something is also means defining everything it is not.
Imagine defining a cat:
- It is a mammal, it purrs, it hunts mice.
But also:
- It is not a dog, not a lizard, not a toaster, not an alien spaceship.
In human conversation, we don’t bother with the negatives. We assume context fills in the rest. But in symbolic systems, the negatives are unavoidable. To keep track of the world’s consistency, you must also track what doesn’t happen:
- When the cat jumps on the bed, it doesn’t cease to exist.
- The bed doesn’t vanish.
- The house doesn’t explode.
Every positive action implies a cascade of negatives. Symbolic planning must spell those out, or at least encode a convention (“everything else remains unchanged”). This convention works only because someone already thought to include it. In other words, we’re always back to managing the infinity of what is not.
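A quick back-of-the-envelope sketch shows how fast that burden grows without the convention (illustrative numbers only):

```python
# Without the persistence convention, classical logic needs an explicit
# frame axiom for (nearly) every action-fact pair: "after pick_up_cup,
# the table still exists", "after pick_up_cup, the walls still stand",
# and so on. Illustrative names; real domains have thousands of each.

actions = ["pick_up_cup", "put_cup_on_shelf", "open_door", "flip_switch"]
facts = ["cup_on_table", "table_exists", "walls_standing",
         "door_open", "light_on", "hand_empty"]

frame_axioms = [f"after {a}: {f} is unchanged"
                for a in actions for f in facts]

print(len(frame_axioms))  # 24 axioms for just 4 actions and 6 facts:
                          # the burden grows as actions x facts
```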
Part 4: Life as the Art of Ignoring Nots
Here’s where the entropy/information perspective helps. Life, in this view, is about preserving and propagating information. But crucially, it does not waste energy encoding every possible negation. Instead, it filters. It compresses. It pays attention only to what matters for survival and reproduction.
- A bacterium doesn’t need to know that the entire universe isn’t sugar. It only needs to know where sugar is.
- A human doesn’t need to track every unchanged property of the world when they move a cup. They only need to remember the new location.
- The brain’s genius lies in throwing away information, not cataloging everything.
In terms of entropy, this is efficiency: reducing Shannon entropy (uncertainty) only in the narrow slice of the world that matters, while ignoring the vast Boltzmann sea of irrelevant possibilities.
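A toy calculation makes the point. Shannon entropy measures uncertainty in bits, and the organism that tracks only the variable it cares about faces far less of it (numbers invented for illustration):

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Tracking an entire toy micro-world of 2**20 equally likely configurations:
world = [1 / 2 ** 20] * 2 ** 20
print(shannon_entropy(world))   # 20.0 bits of uncertainty

# Tracking only the variable that matters ("which of 4 patches has sugar"):
sugar = [0.25] * 4
print(shannon_entropy(sugar))   # 2.0 bits of uncertainty
```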
Symbolic AI, by contrast, tries to nail down the entire frame: every fact, every non-fact, every consequence. It is brittle because it cannot ignore. It must carry the burden of what is not.
Part 5: Neural Networks vs. Symbolic Systems
This is why LLMs felt like liberation compared to symbolic AI.
- They don’t define the world in terms of strict rules.
- They operate statistically, using patterns instead of exhaustive negations.
- They can handle vagueness, defaults, context, and common sense without being explicitly told “the table does not disappear.”
But this fuzziness also makes them sloppy planners. They can’t guarantee correctness without some form of grounding in rules. That’s why the symbolic layer is being reintroduced: to tether the imagination of LLMs to the hard bones of logic.
So we are caught between two modes:
- Neural: Efficient, flexible, context-sensitive, but imprecise.
- Symbolic: Precise, rigorous, but heavy, brittle, and haunted by the frame problem.
The current research shows that hybrids can work — but at the cost of reawakening the old specter of defining everything in terms of what is not.
Part 6: Entropy, Negation, and Information
Let’s zoom out further. Why does logic insist on defining by negation? Because logic is built on binary distinctions: true vs. false, yes vs. no. To define one thing is to exclude all others. Each definition slices the universe into “is” and “is not.”
Life, by contrast, deals in gradients and probabilities. It doesn’t exhaustively negate. It rides the flow of entropy, picking out local order against the background of chaos. The evolutionary genius of DNA, proteins, and neural networks is their ability to store and propagate only the information that matters, while ignoring the rest.
From this lens:
- Symbolic AI is like an accountant who insists on listing every transaction, even the ones that didn’t happen.
- Neural AI is like a gambler who plays the odds, often getting it right, sometimes getting it wrong, but always moving fast.
- Life is somewhere in between: efficient enough to ignore most of the “nots,” precise enough to maintain continuity of function.
Part 7: The Human Parallel
Humans themselves face the frame problem, but we solve it heuristically.
- If I move my car from the driveway to the street, I don’t wonder if gravity still works or if the Earth has exploded. I just assume continuity.
- If I tell you a story, you don’t need me to list all the things that didn’t happen — you default to background stability.
This “assumption of continuity” is deeply entropic: the world tends to persist, order tends to hold locally, and changes are sparse compared to non-changes. Our brains evolved to exploit that structure. We don’t model every non-event; we model only what shifts against the background.
Symbolic AI, however, doesn’t assume continuity unless told. That’s why it struggles. That’s why it seems unnatural. It treats every step as a potential explosion of change, unless you carefully define the boundaries of what is not.
Part 8: The Future — Can We Escape the Burden?
Where does this leave us? The hybrid methods like PDDL-INSTRUCT are promising: they combine neural generalization with symbolic rigor. But they also risk dragging us back into the old trap: brittle systems overloaded by negations.
A deeper path forward may lie in bridging entropy-based reasoning with symbolic logic:
- Use neural nets to filter out the irrelevant “nots” — to compress the world into what matters.
- Use symbolic logic only sparingly, to guarantee correctness in the narrow band where mistakes are costly.
- Think of it as entropy-guided bookkeeping: keep track of changes without drowning in the sea of non-changes.
This mirrors how life works. We don’t define ourselves by all we are not. We survive by focusing on the few things that matter most.
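To make “entropy-guided bookkeeping” slightly more concrete, here is a toy sketch of the idea (my illustration, not a mechanism proposed in the paper): record only the sparse deltas, and reconstruct a full state only on demand.

```python
# Toy "entropy-guided bookkeeping": store only the sparse deltas, never the
# ocean of non-changes; rebuild a full state only when asked.
# Illustrative names throughout.

initial = {"cup": "table", "car": "driveway", "lights": "off"}
deltas = [
    {"cup": "shelf"},   # step 1 -- everything absent here implicitly persists
    {"car": "street"},  # step 2
]

def state_at(step):
    """Replay the change log up to `step` on top of the initial state."""
    state = dict(initial)
    for delta in deltas[:step]:
        state.update(delta)
    return state

print(state_at(2))  # {'cup': 'shelf', 'car': 'street', 'lights': 'off'}
```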
Conclusion: The Weight of Negation
The attempt to teach LLMs to plan shows both the progress and the limitations of AI. On one hand, we can now combine statistical learning with logical rigor to produce reliable plans. On the other, we find ourselves back in the shadow of the frame problem: the need to define not just what is, but what is not.
This tension reveals a larger truth. Intelligence — whether biological, symbolic, or artificial — is about managing entropy. Symbolic logic tries to tame entropy by exhaustive exclusion, but at the cost of brittleness. Neural systems thrive by ignoring most of the irrelevant possibilities, at the cost of occasional error. Life itself balances the two: preserving the thread of information that matters, while letting the infinite “nots” fade into the background.
In the end, perhaps intelligence is best understood not as the ability to define everything, but as the ability to not define everything. To let go of the burden of negation, and focus instead on the thin edge of change where information truly lives.