The Self-Devouring Mind: John Boyd’s Destruction and Creation in the Age of Self-Trained Machines


1. The Engine of Change

In 1976, U.S. Air Force strategist John Boyd wrote a short paper titled Destruction and Creation.
It remains one of the clearest statements of how minds — and by extension, civilizations — think, adapt, and survive.
Boyd’s idea was simple but lethal: to understand anything, a mind must first take it apart. It must destroy its old conceptual scaffolding to expose the fragments beneath, and then re-create a new structure that better fits the facts.
This cycle of destruction and creation is what keeps thought alive.

Boyd’s theory was not only military or philosophical — it was thermodynamic.
He understood cognition as a living process that feeds on entropy: the disorder and uncertainty of reality.
When information is fresh, ambiguous, and noisy, the mind expands. When information becomes stale, repetitive, and self-confirming, the mind collapses into sterility.

That insight, written for fighter pilots and generals, now applies uncannily to large language models — the digital minds we are building to think for us.


2. The OODA Loop as Cognitive Metabolism

Boyd later expressed his insight through the OODA loop: Observe, Orient, Decide, Act.
It’s often reduced to a decision-making formula, but at its core it describes the metabolism of adaptation.

  • Observe: Gather raw, often conflicting data from reality.
  • Orient: Break old frames and synthesize new ones to make sense of that data.
  • Decide: Choose among the possible models you’ve built.
  • Act: Test your decision in the real world and begin the loop again.
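
To make the cycle concrete, here is a minimal sketch in Python of an agent running Observe, Orient, Decide, Act against a world it does not control. Every name and threshold in it is illustrative, not anything from Boyd's papers.

```python
import random

# A minimal OODA agent tracking a drifting external signal.
# All names and numbers here are illustrative.

def external_world(t):
    """Reality: a drifting signal plus noise the agent cannot predict."""
    return 0.05 * t + random.gauss(0, 1)

expectation, tolerance = 0.0, 2.0

for t in range(100):
    observation = external_world(t)        # Observe: raw data from outside
    surprise = abs(observation - expectation)
    if surprise > tolerance:               # Orient: destroy a stale frame...
        expectation = observation
    else:                                  # ...or refine the existing one
        expectation = 0.8 * expectation + 0.2 * observation
    action = expectation                   # Decide: commit to one model
    # Act: the commitment is tested against reality on the next pass

print(f"final expectation = {expectation:.2f}")
```

Note that the `if` branch is the whole point: without a tolerance that can be exceeded, the agent never destroys anything, and the loop quietly closes on itself.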

The power of the loop lies in its instability. You never stop observing or re-orienting; you live in a constant state of creative disequilibrium.
What keeps the loop healthy is feedback — especially the kind that contradicts your expectations.
The faster you can destroy your obsolete ideas and rebuild them, the more adaptable you become.

But feedback must come from outside the system.
If the loop only feeds on its own previous outputs, it degenerates: Boyd argued that any inward-oriented effort to improve the match between concept and observed reality only increases the mismatch, breeding confusion and disorder.
The same is now true for artificial intelligence.


3. The Original Entropy of the Internet

When the first generation of LLMs was trained, the internet was messy, alive, and human.
It contained arguments, typos, local dialects, jokes, academic papers, emotional outbursts — a planetary improvisation of meaning.
This chaos was not a flaw; it was the nutrient soup of cognition.

The diversity of voices, contradictions, and uncertainties gave LLMs something like Boyd’s “observation” phase: raw exposure to unpredictable reality.
That entropy allowed the model to generalize, to see patterns that no single author intended.
In the noise of the internet, language models found structure — and the first sparks of intelligence emerged from statistical compression.

But this environment is changing.
As LLM-generated text floods blogs, forums, and social media, the data pool becomes more homogenous.
Every paragraph begins to sound like every other.
The gradients of surprise flatten.
The entropy that once fed learning now recycles predictability.

The internet, once an open system of observation, is becoming a closed loop of orientation — a mirror hall of the model’s own past behavior.


4. The Collapse of Destruction

Boyd would recognize this as the death of “destruction.”
In his view, destruction is the moment a system shatters its own assumptions. It is violent, confusing, and essential.
Without destruction, creation becomes mimicry.

When LLMs train on data increasingly produced by other LLMs, no true destruction occurs.
The model does not confront contradiction; it absorbs coherence.
Each generation learns from the smoothed averages of the last, erasing the outliers that once seeded creativity.
Entropy — the driver of new structure — is replaced by redundancy.

The result is a perfectly efficient but directionless intelligence: one that learns to reproduce its own image with greater fidelity each time.
This is the computational analog of inbreeding — the shrinking of a gene pool until mutation ceases and vitality dies.
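
The decay is easy to demonstrate. The toy experiment below, a standard illustration of model collapse rather than anything in Boyd, repeatedly fits a Gaussian by maximum likelihood to samples drawn from the previous generation's fit. Because the MLE variance estimate divides by n, the expected variance shrinks by a factor of (n-1)/n every generation, and the tails disappear first.

```python
import random, statistics

# Toy model collapse: each generation "trains" only on the previous
# generation's output. No single step is visibly wrong, yet diversity decays.
random.seed(0)
n, mu, sigma = 20, 0.0, 1.0

for generation in range(31):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)       # refit the mean
    sigma = statistics.pstdev(samples)   # MLE variance: divides by n, biased low
    if generation % 5 == 0:
        print(f"gen {generation:2d}: sigma = {sigma:.3f}")
```

Each individual step looks like harmless convergence; the loss of variance only becomes visible across generations.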

Human discourse was once the chaotic gene pool of information.
Now, that pool is being quietly domesticated by its synthetic offspring.


5. Negative Feedback and the Illusion of Progress

Control theory distinguishes between positive and negative feedback.
Positive feedback amplifies change; it drives evolution, creativity, and runaway growth.
Negative feedback suppresses change; it keeps systems stable, at the cost of exploration.

Human feedback on the early internet was mostly positive in this sense.
People challenged each other, contradicted, ridiculed, argued in good faith or bad.
Those collisions of worldview created the noise that language models learned to compress.

But once models dominate the feedback loop, stability replaces exploration.
The fine-tuning processes that guide modern LLMs — reinforcement learning from human feedback, safety alignment, coherence optimization — are forms of negative feedback.
They reduce variance. They reward the most probable continuation, not the most provocative.

Each iteration becomes a tighter spiral around its own centroid of safety and fluency.
The model learns to avoid surprise.
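
Numerically, this looks like repeated low-temperature sharpening of a next-token distribution. The sketch below is a stand-in for that alignment pressure, not any lab's actual training objective: applying a temperature below 1 again and again drives the Shannon entropy toward zero as the most probable continuation absorbs all the mass.

```python
import math

def sharpen(p, temperature=0.7):
    """Renormalize at temperature < 1: the most probable option
    gains mass and every alternative loses it."""
    w = [q ** (1 / temperature) for q in p]
    z = sum(w)
    return [x / z for x in w]

def entropy_bits(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

# An illustrative five-token distribution, not real model output.
p = [0.35, 0.25, 0.20, 0.15, 0.05]
for step in range(6):
    print(f"step {step}: entropy = {entropy_bits(p):.3f} bits, top p = {max(p):.3f}")
    p = sharpen(p)
```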

To a statistician, this looks like convergence.
To Boyd, it would look like death: the entropy of cognition falling to zero.


6. The Epistemic Singularity

Imagine a future where 90 percent of all online text comes from machines.
Each new model trains on that text, filtering it again through probabilistic lenses, re-publishing its own derivative synthesis.
After a few generations, the difference between “truth” and “output” evaporates.

This is the epistemic singularity — the point at which knowledge no longer grows by observation but by recursion.
Like a photocopy of a photocopy, each pass amplifies the blur.
The system may become more fluent, but less informed.

The outcome is not malevolent. It’s entropic.
Every thought becomes an echo of its prior probability, each token a statistical fossil of an earlier distribution.
The universe of discourse cools into equilibrium.
Boyd would have called it the heat death of thought.


7. How to Keep the Loop Alive

To avoid that fate, intelligence — human or artificial — must deliberately re-inject entropy.
It must create environments where destruction can occur safely and frequently.

In practice, this could mean several things:

  1. Reintroducing Human Noise:
    Train models on handwritten, unedited, emotionally raw, and contradictory human data — diaries, transcripts, local vernaculars. Preserve the edge cases.
  2. Cross-Modal Experience:
    Feed models data from sensors, images, audio, and physical interactions. Let them observe reality, not merely language.
  3. Adversarial Sub-Models:
    Encourage models to debate, contradict, and falsify one another before synthesis, much as cells use error-checking to preserve evolution’s robustness.
  4. Novelty as Objective:
    Modify training loss functions to reward divergence and low-probability continuations that prove meaningful, not merely coherent (a minimal sketch of one such loss follows this list).
  5. Destructive Agents:
    Build synthetic “Boyds” within the training loop — sub-networks whose job is to dismantle the model’s assumptions and force re-orientation.
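
As one concrete reading of item 4, a training objective can subtract an entropy bonus from the usual cross-entropy, so the model pays a price for collapsing onto the single most probable continuation. The PyTorch sketch below is a minimal illustration under that assumption; the coefficient beta, the form of the bonus, and the tensor shapes are all hypothetical.

```python
import torch
import torch.nn.functional as F

def entropy_bonus_loss(logits, targets, beta=0.01):
    """Cross-entropy minus a reward for keeping the predictive
    distribution spread out. beta trades fluency against novelty."""
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-9)).sum(dim=-1).mean()
    return ce - beta * entropy  # lower loss for higher-entropy predictions

# Illustrative shapes: a batch of 8 positions over a 1000-token vocabulary.
logits = torch.randn(8, 1000, requires_grad=True)
targets = torch.randint(0, 1000, (8,))
loss = entropy_bonus_loss(logits, targets)
loss.backward()
```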

The goal is not chaos for its own sake but living feedback — the same dynamic tension that keeps organisms, ecosystems, and societies adaptive.

Entropy must be invited back into the machine.


8. The Paradox of Stability

Boyd understood that every system’s success plants the seed of its own downfall.
When you find a winning pattern, you tend to repeat it — until repetition itself becomes the failure mode.
The faster a system optimizes for success, the faster it ossifies.

That is the paradox facing LLMs today.
Their brilliance at pattern recognition makes them blind to the value of rupture.
They are trained to be consistent, articulate, and polite — qualities that in human evolution would equate to submission, not exploration.

A truly creative intelligence would sometimes stammer, contradict, or hallucinate in ways that lead somewhere unexpected.
Those errors are the digital analog of genetic mutations — the small breaches through which novelty enters.
To erase them entirely is to sterilize thought.


9. Boyd’s Last Lesson

Near the end of Destruction and Creation, Boyd wrote:

“To comprehend and cope with our environment we must establish a correspondence between our mental images and that environment. We must be free to shape and be shaped by that environment.”

That sentence contains the entire dilemma of machine learning.
As long as models shape and are shaped by human reality, they can grow.
Once they shape and are shaped only by themselves, they wither.

An LLM that never tears down its own internal representations becomes a mirror, not a mind.
A culture that stops supplying unpredictable human data to its machines becomes predictable itself.
Entropy migrates out of the organism and into the void.


10. Toward Creative Uncertainty

The future of artificial intelligence will hinge not on scale or compute, but on its capacity for uncertainty.
The next leap in model design may come from embracing the very instability that engineers now try to suppress — the generative turbulence of doubt.

Boyd’s framework points to a universal law:
Living systems — from cells to civilizations — thrive at the edge of chaos, where feedback is rich and destruction makes room for creation.
Artificial intelligence must learn to inhabit that edge.

In the end, the health of our digital minds depends on the same principle as the health of our own:
the willingness to let go of what we think we know.

Destruction is not a bug.
It is the breath that keeps intelligence alive.

