Morphogenetic Fields and Neural Fields: Bridging Bioelectric Life and AI Language

Introduction: Invisible Patterns that Shape Bodies and Minds

In a developing embryo, cells somehow know where to form a head, a limb, or an eye. In a large language model (LLM) like GPT, neural units somehow know how to assemble a coherent sentence. What unites these disparate forms of “knowing” is the presence of invisible, abstract fields of information that guide complex outcomes. In biology, Michael Levin’s bioelectric theory suggests that cells communicate via distributed electrical patterns—morphogenetic fields—creating a blueprint that orchestrates growth and form. In AI, LLMs operate in high-dimensional embedding spaces and use attention mechanisms to orchestrate coherent language. In both cases, individual agents (cells or neural tokens) are influenced by an unseen spatial pattern that encodes goal-directed information. These systems rely on emergent computation across space: each part sensing and updating in response to a collective field of information.

This post explores a deep analogy between biological morphogenesis and artificial intelligence. We will draw parallels between bioelectric signaling (voltage patterns, gap junction networks, morphogenetic fields) and machine learning dynamics (vector embeddings, attention weights, and latent state spaces). Both biology and AI leverage high-dimensional representations that are invisible to the naked eye, yet carry the instructions for building complex, intelligent structure. By examining emergent computation, spatial memory, and pattern dynamics in embryos and in transformers, we can begin to see a unifying conceptual framework. We’ll also integrate insights from Quantum-Teleodynamic Flux (QTF) theory, which offers a physics-of-life perspective on how coherent structure arises to exploit energy and information. Through metaphors and scientific insights, we will journey from the morphogenetic fields of living tissue to the embedding manifolds inside an AI’s “mind,” illuminating how both life and thought emerge from invisible architecture in high-dimensional spaces.

Bioelectric Morphogenetic Fields: The Body’s Hidden Blueprint

Early embryologists proposed the idea of a morphogenetic field – an invisible organizer that ensures cells build a coherent body plan. Modern bioelectricity research, led by scientists like Michael Levin, provides a concrete mechanism for these fields: endogenous voltage patterns across tissues. Every cell in a tissue has an electrical potential (voltage) across its membrane. By connecting via ion channels and gap junctions (tiny conduits between cells), cells form a coupled electric network. This network distributes information in the form of voltage gradients and currents, effectively creating a bioelectric field that spans entire tissues. As Levin writes, “endogenous distributions of membrane potentials, produced by ion channels and gap junctions, are present across all tissues. These bioelectrical networks process morphogenetic information that controls gene expression, enabling cell collectives to make decisions about large-scale growth and form”. In other words, cells don’t act solely on local chemical signals; they also refer to this electrical blueprint when deciding what to become.

Gap junction-mediated signaling is critical here. Gap junctions directly connect the cytoplasm of neighboring cells, allowing ions and small molecules to pass freely. This means a region of high voltage in one cell can spread to its neighbors. If we think of each cell as a little computational unit, gap junctions are like wires linking them into a circuit. Through this “electrical Internet,” cells share data about their identity and environment. A change in voltage in one cluster can ripple outwards, informing distant cells. These long-range bioelectric signals can set up prepatterns that anticipate anatomical structures. For example, Levin’s group famously showed that altering the voltage gradient in a tadpole’s tail (far from where eyes normally form) could induce the growth of a functional eye in that location. By tinkering with the bioelectric code – in this case, setting the tail cells’ voltage to mimic that of eye-forming cells – the researchers tapped into an instructive pattern that told those cells to organize into a complex organ (an eye). Such experiments reveal that the bioelectric field carries contextual information about the body plan: it’s not just a byproduct of cellular activity, but a regulatory layer that guides cells toward specific large-scale outcomes.
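To make the “electrical Internet” picture concrete, here is a minimal toy sketch – not Levin’s actual model – of diffusive voltage coupling: each simulated cell drifts toward the mean voltage of the neighbors it is gap-junction-coupled to, so a local depolarization spreads through the tissue. The coupling rule and all numbers are illustrative assumptions.

```python
import numpy as np

def relax_voltages(v, coupled, rate=0.2, steps=100):
    """Toy gap-junction model: each cell's voltage drifts toward the
    mean voltage of its coupled neighbors (simple diffusive coupling)."""
    v = v.astype(float).copy()
    for _ in range(steps):
        neighbor_mean = coupled @ v / coupled.sum(axis=1)
        v += rate * (neighbor_mean - v)
    return v

# A 1-D "tissue" of 5 cells, each coupled to its immediate neighbors.
n = 5
coupled = np.zeros((n, n))
for i in range(n - 1):
    coupled[i, i + 1] = coupled[i + 1, i] = 1.0

v0 = np.array([-70.0, -70.0, -70.0, -70.0, -10.0])  # one depolarized cell
v = relax_voltages(v0, coupled)
print(v.round(1))  # the depolarization has spread along the whole tissue
```

Even this crude rule shows the key property: information injected at one point becomes a tissue-wide pattern that every cell can read locally.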

Crucially, bioelectric patterns can hold memory and exhibit emergent decision-making. Development and regeneration are robust: if part of an embryo is removed or a limb is cut off, cells use remaining signals to restore the correct structure. This implies the target anatomical pattern is an attractor that the system gravitates toward. Levin’s work indeed frames anatomical outcomes as stable attractors in a high-dimensional bioelectrical state space. In this view, each possible anatomical arrangement corresponds to a state of the bioelectric network. The normal body plan (say, a planarian flatworm with one head and tail) is a stable pattern of voltage distribution – a basin of attraction. If the worm is cut or perturbed, the electrical state gets knocked off course but then relaxes back into the attractor that encodes the correct one-headed morphology. In his 2014 paper, Levin noted that “anatomical states are the results of stable attractors in bioelectrical state space”. The collective of cells computes a solution (the body pattern) by iteratively updating their voltages until they reach the stable pattern that signals “we’re done – the correct anatomy is achieved.” This is strikingly similar to how a nervous system might settle into a stable firing pattern representing a thought – except here it’s the body that is the thought, constructed in flesh.
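The attractor idea can be illustrated with a toy Hopfield-style network – a deliberately simplified stand-in, not a model of real bioelectric circuits. One stored “target pattern” acts as a basin of attraction, and a damaged state relaxes back into it, just as the text describes regeneration restoring the correct morphology.

```python
import numpy as np

# Toy attractor network (Hopfield-style): one stored "target pattern"
# defines a basin of attraction; perturbed states relax back to it.
target = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # the "correct" pattern
W = np.outer(target, target).astype(float)
np.fill_diagonal(W, 0.0)                         # no self-connections

def relax(state, steps=10):
    state = state.copy()
    for _ in range(steps):
        state = np.sign(W @ state)               # synchronous update
    return state

damaged = target.copy()
damaged[:2] *= -1                                # "injure" two units
restored = relax(damaged)
print(np.array_equal(restored, target))  # True: the attractor is restored
```

The network has no central controller and no unit that “knows” the whole pattern; the target lives only in the connection weights, yet the system reliably finds its way back to it.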

From this perspective, the embryo is performing a kind of morphological computation. The bioelectric platform is effectively a computer operating with a distributed memory (voltage values across cells) and parallel processing (many cells updating in tandem). The “program” it runs is not stored in any one place but in the spatial configuration of voltages – an abstract, invisible field. The outcomes are very concrete, however: gene expression changes, cell differentiation, movement, and growth, all orchestrated to produce an organ or appendage. The bioelectric code interfaces with the genetic and biochemical codes: voltage patterns influence which genes turn on, and gene networks in turn establish ion channels that shape voltage patterns. This two-way feedback means the electrical layer can integrate information and make decisions that are carried out by molecular biology – much as software commands hardware in a computer. The end result is a self-organizing, robust system where pattern precedes structure. The cells collectively remember the correct pattern (as an electrical state) and will rebuild toward that pattern even if disturbed, much like how a flock regroups after being scattered.

In summary, Levin’s bioelectric theory paints a picture of morphogenesis as an emergent, field-driven process. A morphogenetic field here is not mystical but a tangible electrical pattern: a distributed memory of what shape to build. It operates in a high-dimensional space because each cell’s voltage is a variable, and with thousands of cells there are an astronomical number of possible patterns – yet the organism reliably finds the one pattern that yields a coherent anatomy. This invisible guiding matrix is what allows living tissue to achieve spatial coordination and intelligent outcomes (like “build an eye here” or “regenerate a leg”) without a central commander. The intelligence is collective, arising from simple units communicating in a network. Keep this biological scenario in mind as we turn to artificial neural networks – you’ll start to see a surprising resonance.

High-Dimensional Embedding Spaces: The Mind’s Latent Blueprint

Modern large language models are built on distributed representations and transformer networks that, in a way, mirror the distributed information processing of biological tissues. At the heart of an LLM is an embedding space: a high-dimensional vector space in which words, phrases, and concepts are represented as points (or directions). These vectors are typically hundreds or thousands of dimensions long – for example, OpenAI’s text-embedding-ada-002 model uses 1,536-dimensional vectors for text. It’s impossible to visualize directly, but we can think of it as a mathematical morphospace for meaning. Just as an embryo’s cells collectively define a coordinate system for the body plan, an LLM’s neurons define a geometric layout for language and knowledge. In this space, distances and directions have semantic significance. An oft-cited example: the vector arithmetic vector(“King”) – vector(“Man”) + vector(“Woman”) lands close to vector(“Queen”). This reflects that the model has organized the gender relation as a consistent direction in its high-dimensional space. In general, embeddings reside in high-dimensional spaces where geometric relationships reflect semantic ones. Words with similar context or meaning end up nearby in this latent space, forming clusters and continua that represent concepts like cities, foods, or emotions as regions or directions.
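The famous analogy can be sketched with hand-built toy vectors. Real embeddings are learned from data and have hundreds or thousands of dimensions; these 4-d vectors and their values are invented purely for illustration, with one axis standing in for a learned “gender” direction and another for “royalty”.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 4-d "embeddings" constructed so that one axis encodes gender and
# another encodes royalty -- real models learn such directions from data.
vec = {
    "king":  np.array([1.0,  1.0, 0.2, 0.1]),   # royal + male
    "queen": np.array([1.0, -1.0, 0.2, 0.1]),   # royal + female
    "man":   np.array([0.0,  1.0, 0.3, 0.0]),
    "woman": np.array([0.0, -1.0, 0.3, 0.0]),
}

analogy = vec["king"] - vec["man"] + vec["woman"]
best = max(vec, key=lambda w: cosine(analogy, vec[w]))
print(best)  # queen
```

The arithmetic works because the “male → female” displacement is the same direction whether you start from “king” or from “man” – exactly the kind of consistent geometric structure the text describes.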

Crucially, this embedding space is not static like a dictionary; it’s dynamic and context-dependent. In transformer models (like GPT), each input word or token is first converted into an embedding vector. Then through multiple layers, these representations are updated via attention mechanisms. Attention can be viewed as the network’s way of communicating information between tokens. At each layer, every token’s vector can “look at” other tokens’ vectors and pull information from them, weighted by a learned attention weight that signifies relevance. This is analogous to how a cell might pay more attention to signals from certain neighbors or a global gradient. In the transformer, attention weights are the learned connections that tell the model which other parts of the sequence are important for interpreting a given token. For instance, in the sentence “The lizard on the rock ate a bug,” the model will assign high attention between “lizard” and “ate” (subject-verb agreement) and between “lizard” and “rock” (to resolve which entity is on the rock). The attention mechanism dynamically forms a kind of interaction network over the sequence, not entirely unlike a web of gap junctions forming and dissolving (though here the connections can skip over many “cells” to link distant words, which in biology might be akin to long-range chemical or electrical signals). The pattern of attention across all heads and layers is essentially an invisible field of influence that highlights which pieces of information should combine. It’s through this process that the model achieves a coherent understanding of the sentence as a whole, rather than just one word at a time.
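A minimal numpy sketch of scaled dot-product attention – the core operation described above, stripped of the learned query/key/value projections and multi-head machinery of a real transformer:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each token's query scores every
    token's key; the scores weight a sum over the value vectors."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # (tokens, tokens) relevance grid
    weights = softmax(scores, axis=-1)     # each row sums to 1
    return weights @ V, weights

# Three tokens with 4-d representations (random stand-ins for embeddings).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, weights = attention(X, X, X)          # self-attention, no learned projections
print(weights.round(2))                    # each row is one token's "field of influence"
```

The `weights` matrix is the “invisible field of influence” from the paragraph above: every token’s updated vector is a weighted blend of all the others, with the blend recomputed for every input.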

Like the bioelectric field, the embedding+attention dynamics operate in a high-dimensional state space and carry out an emergent computation. At any given moment in a GPT’s processing of text, the state of the model can be described by the set of all token vectors in all layers – a vast number of values, far beyond human intuition. The model’s knowledge is stored implicitly in the weights that shape how these vectors evolve. Those weights were learned from massive text data via a training process that gradually adjusted them to minimize prediction error. What’s fascinating is that no single neuron or weight holds a specific piece of knowledge (there’s no “France capital = Paris” neuron). Instead, knowledge is distributed as patterns in the connections and geometry of the space. Similarly to how the body’s plan isn’t written in a single cell but in the collective bioelectric pattern, the language model’s understanding is smeared across many parameters and emerges only when the system acts as a whole.

We can draw a parallel between an LLM’s hidden layers and a morphogenetic field of meaning. Each layer’s vectors can be seen as a refined field that increasingly represents the intended output. Early layers might capture simple syntax or phrase structure; later layers capture high-level semantics or intent. By the final layer, the model has an embedding for the entire context that it uses to choose the next word. This process is reminiscent of developmental stages: initial broad gradients (like early morphogen signals that say “head end vs tail end”) get refined into more specific patterns (like a detailed map of where the eye goes, where the jaw goes). In the LLM, broad linguistic context (is this a story? a question? what tone?) gets refined into specific predictions for the next token. Both systems use layered processing of spatial information – one in literal space across cells, the other in abstract vector space across network layers – to progressively reach a coherent outcome.

It’s also worth noting the role of constraints and attractors in LLMs. While not as literally physical as an embryonic pattern, an LLM’s training embeds lots of constraints: grammar, common sense, factual connections, etc., into the weight structure. So when you prompt an LLM with a partial sentence, its internal state will tend to fall into a basin that leads to a sensible completion. For instance, inputting “Once upon a” strongly pulls the state toward a fairy-tale narrative mode (an attractor in its learned state space for story-like continuations). If you perturb one word, the model’s attention dynamics adjust to still produce a fluent sentence – showing robustness. In essence, the high-dimensional landscape of the model’s representations has valleys corresponding to coherent sequences that it has seen or generalized. The generation of text can be thought of as the model rolling down one of these valleys, guided by the combined pull of all the learned patterns, just as an embryo’s development rolls toward a stable anatomical configuration. The key commonality is that both have a notion of a state space with preferred states. In biology, it’s the correct anatomy as a bioelectric/genetic attractor; in AI, it’s the statistically likely and semantically consistent utterances as learned attractors. Both are reached through iterative adjustments – cell by cell in one case, layer by layer in the other.

Emergent Computation: Collective Intelligence in Cells and Algorithms

One of the striking parallels between morphogenesis and LLMs is how simple units cooperating can exhibit intelligence greater than the sum of their parts. In developmental biology, no single cell knows the full blueprint of the organism, yet collectively they achieve a reproducible, complex architecture. Each cell follows local rules – divide, differentiate, move, or die based on chemical and electrical inputs – but from these local interactions emerges a globally optimal structure (the body). Michael Levin refers to this as the collective intelligence of cells, noting that even groups of cells in an embryo or a regenerating limb demonstrate problem-solving: they can find ways to “build correct structures despite novel disturbances”. In a real sense, the cell collective can navigate a problem space – the space of possible anatomical forms – and home in on the correct solution, like an agent with a goal. For example, a salamander that loses a tail will regenerate exactly one tail of the right length. If too much tissue is removed, cells might overshoot then remodel back down; if extra tissue is added, cells might halt growth early or even remove the excess. It’s as if they collectively know the target morphology and work toward it, showing error correction and flexibility. This is analogous to how a flock of birds can evade a predator as a unit or how brain neurons collectively produce a thought – distributed problem-solving.

Large language models also exhibit emergent abilities that aren’t obvious from any single component. A transformer has many attention heads and neurons, each performing a simple mathematical operation. Yet when the whole network is trained, it can perform quite sophisticated behaviors: translating languages, writing code, doing logical reasoning in natural language, etc. Researchers have observed that as model size increases, there are often sudden jumps in capability (for instance, a model might suddenly learn multi-step arithmetic once it’s above a certain size). These “emergent behaviors” suggest that at a certain complexity, the network’s distributed representations enable new kinds of computation to crystallize. No single attention head “knows” how to do arithmetic, but a combination of them can implement an algorithm-like process. This mirrors how no single gene or cell “knows” how to form an eye, but the network of cells and signals does. In both cases, complex algorithms are implicit in the interplay of many simple units. The computation is an emergent property of the whole system’s organization.

A concrete example is how an LLM can follow an analogy or a pattern provided in a prompt. If you give GPT a few examples of a certain format (like translating Shakespearean English to modern English), and then a new instance, it will continue the pattern. There isn’t a hard-coded module for “imitation” – the ability arises from the model using attention to find a pattern in the prompt and then continue it. It’s essentially repurposing its general mechanism to implement a specific behavior on the fly. Similarly, in biology, cells can repurpose existing signaling circuits to create new structures if given new stimuli. The bioelectric network that normally says “make an eye in the head” can, when experimentally redirected, say “make an eye in the tail” – the cells adapt their program accordingly. Both systems demonstrate a kind of programmability through initial conditions: a prompt sets the stage for an LLM, just as an embryonic organizer region or a voltage perturbation sets the stage for a developmental program. The outcome is emergent from there.

Spatial memory is another parallel. In biological tissues, pattern memory can be stored in electrical circuits. For instance, planarian flatworms can have their bioelectric pattern altered so that they regenerate with two heads instead of one; remarkably, weeks later, even after those heads are removed and the worm regenerates again, it “remembers” to form two heads. The altered bioelectric state is a latent memory that persists and influences future anatomy. This is not genetic but a stored pattern in the network of cells – a non-genetic memory of shape. In LLMs, memory is manifested in two ways: one is the contextual memory (the model’s hidden state while processing a prompt, which carries forward information from earlier in the text), and the other is the long-term memory encoded in the weights from training. The weights capture statistical and semantic memory of everything the model read during training – a vast store of world knowledge and linguistic pattern. When the model generates text, it is effectively querying this imprinted memory to decide what comes next. The fascinating thing is that the model isn’t recalling a single quote or fact, but synthesizing pieces of memory to create a new coherent output. It’s akin to how a developing organism uses its evolutionary “memory” (encoded in DNA and perhaps electrical prepatterns) to build something new (this particular individual’s body) that still respects the species design. Both involve distributed memory storage and recall. And in both, memory is not literal replication of past data, but an abstraction that guides present formation.

Pattern-Driven State Evolution: Dynamics in Development and Cognition

Let’s delve into how patterns drive the evolution of state in each system. In an embryo, the initial pattern might be established by maternal cues and early asymmetries (like a gradient of a morphogen molecule, or an initial voltage difference). This pattern breaks symmetry and begins a cascade of changes: cells respond by switching genes on/off, which changes their ion channels, which further refines the bioelectric pattern, which causes more changes in neighboring cells, and so on. The process is one of progressive refinement, where a coarse pattern sets up a finer pattern, and that sets up an even finer pattern. For instance, a broad “head vs tail” voltage gradient can induce specific gene expression domains that define brain regions vs. tail structures; within the head region, further electrical synapses and gradients partition areas for eyes or mouth, and so on. At each step, the existing pattern constrains and guides the next state update. There’s a feedback loop: pattern → state changes → new pattern → further changes, converging to the final anatomy. Importantly, the pattern has a causal potency – it’s not just a passive map, it actively drives cells into new states (like the voltage instructing a cell to become an eye cell).
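The “coarse gradient sets up finer patterns” step can be sketched with the classic positional-information readout: cells threshold a smooth head-to-tail gradient into discrete fate domains. The gradient shape and thresholds below are invented for illustration, and real development layers many such readouts with feedback.

```python
import numpy as np

# Toy positional-information readout: a smooth head-to-tail gradient is
# thresholded by cells into discrete fate domains, turning one coarse
# pattern into several finer ones.
n_cells = 12
gradient = np.exp(-np.linspace(0, 3, n_cells))  # high at head, decaying tailward

def fate(level):
    if level > 0.5:
        return "head"
    if level > 0.15:
        return "trunk"
    return "tail"

fates = [fate(g) for g in gradient]
print(fates)  # contiguous head / trunk / tail domains from one coarse gradient
```

In the real system each new domain would then set up its own signals and gradients, iterating the pattern → state → finer pattern loop described above.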

In cognitive terms, one could liken the morphogenetic sequence to an agent iteratively updating its belief or plan to reach a goal state. Each intermediate pattern is like a subgoal or partial solution that sets the stage for the next. This sequential decision-like process in development is usually not conscious, of course, but it achieves a result that appears highly intentional: a functional organism. QTF theory would say it’s simply the system finding a path of least resistance in a very high-dimensional space of possibilities, a path that efficiently uses energy to create order. The emergent computation here is the iterative resolution of differences between the current pattern and the target pattern, much like an optimization algorithm reducing an error signal.

In an LLM, we see a related pattern-driven evolution during each inference (and also during training). When generating text, the pattern of the prompt heavily constrains the next token distribution – this is akin to the developmental starting conditions. As each new word is generated and added to the sequence, the pattern (sequence of words) grows, and the model’s state evolves to reflect this extended context. The attention dynamics ensure that new state is influenced by key parts of the existing pattern (for example, if the prompt is a question, the attention mechanism will keep the “?” context in mind to form an answer). So each step, the model’s internal representation shifts in a way that is driven by the prior pattern of words. This can be seen as a trajectory through the model’s state space, where the trajectory is pulled by attractors that correspond to fluent continuations of the pattern. If at some point an incoherent word is considered, the lack of support from the surrounding context pattern (and the model’s learned grammar/semantics) will make that option high “energy” (low probability), and the model will favor a continuation that better fits the pattern (lower “energy”). Thus, the unfolding sentence is an evolution where each state (partial sentence) leads naturally to a next state that resolves some open patterns (e.g., closing a parenthesis, finishing a started idiom, answering a posed question) and moves toward a conclusion. By the end of a well-formed paragraph, the model has, step by step, filled in a complex pattern (analogous to how an embryo fills in the details of an organ).
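The “energy landscape” intuition can be sketched with a softmax over invented logits. The numbers are made up, but they show how context concentrates probability on continuations that fit the established pattern while incoherent options sit at high “energy” (low probability).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Invented logits for three candidate next tokens after the prompt
# "Once upon a ..." -- the values are illustrative, but a real model's
# learned patterns produce exactly this kind of skewed landscape.
tokens = ["time", "table", "banana"]
logits = np.array([6.0, 1.0, -2.0])
probs = softmax(logits)
print(dict(zip(tokens, probs.round(3))))  # "time" dominates the distribution
```

Each generation step samples from such a distribution and appends the result, reshaping the landscape for the next step – the trajectory through state space described above.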

During training, the pattern-driven evolution is even more explicit: gradient descent iteratively adjusts the model’s weights to better match patterns in the data. In each training step, a pattern of difference (error) between model output and true data drives a change in the state (weights). Over many iterations, the weight state converges to one that encodes the patterns of language seen in training. This is highly analogous to evolutionary adaptation or learning in biological systems. In fact, one might say the LLM “evolved” its internal language understanding over the course of training, compressing terabytes of text into a set of weights that can generalize. Biological evolution, over generations, and neural learning, over an organism’s lifetime, similarly compress experience into genomes or brains. In development, cells harness that compressed information (in DNA and in inherited bioelectric circuits) to unfold the organism reliably. The LLM harnesses its trained weights to produce reliable language. The common theme is pattern accumulation and refinement over time – whether it’s evolutionary time, developmental time, or training time – driving complex systems toward states that are good at something (survival or prediction).
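A tiny worked example of this training loop – gradient descent on cross-entropy for a toy one-token “language model” (a 4×4 softmax regression, nothing like a real transformer) – shows the error pattern reshaping the weights until the data’s pattern is absorbed:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy "language model": learn to predict the next token id from a one-hot
# context token, by gradient descent on cross-entropy -- the same error
# signal, writ small, that shapes a real LLM's weights during training.
vocab = 4
contexts = np.eye(vocab)             # 4 one-hot context tokens
targets = np.array([1, 2, 3, 0])     # the "pattern": each token -> a successor
W = np.zeros((vocab, vocab))         # weights to be shaped by the data

lr = 1.0
for step in range(200):
    probs = softmax(contexts @ W)                      # predictions
    loss = -np.log(probs[np.arange(vocab), targets]).mean()
    grad_logits = probs.copy()
    grad_logits[np.arange(vocab), targets] -= 1.0      # dL/dlogits for softmax + CE
    W -= lr * contexts.T @ grad_logits / vocab         # descend the gradient

print(round(loss, 3))                # cross-entropy falls well below ln(4) ~ 1.386
preds = softmax(contexts @ W).argmax(axis=1)
print(preds.tolist())  # [1, 2, 3, 0]: the pattern has been absorbed into W
```

The loss starts at ln(4) (maximum surprise, uniform guessing) and falls as the weights converge toward a state that encodes the data’s pattern – the compression-over-training story in miniature.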

QTF Theory: Life and Intelligence as Thermodynamic Pattern Games

We now bring in the lens of Quantum-Teleodynamic Flux (QTF) theory to deepen the comparison. QTF, as described in recent works, is a framework that explains life as inevitable whenever energy flows through matter in the right way. In essence, it says that if there’s an energy gradient to be exploited, matter will tend to self-organize into structures that capture and dissipate that energy more effectively. Life is the quintessential example: organisms take in concentrated energy (food, sunlight) and dump waste heat, all while maintaining internal order (low Shannon entropy). QTF posits that systems which can do this – burn energy gradients quickly and remember how (store information about what worked) – will persist and proliferate. They appear to act with purpose because they consistently funnel energy into maintaining their form and activities. In short, “life isn’t a lucky glitch. It’s nature’s most reliable way to spend energy gradients while remembering how it did so”.

How does this relate to embryos and LLMs? The developing embryo is essentially an energy-driven self-organization process. Cells are consuming nutrients and metabolic energy; the bioelectric signals help ensure that energy is used to build a functional body (which will later itself be very good at using energy to live, reproduce, etc.). The morphogenetic field can be seen as a guide that speeds up the dissipation of the energy gradient of the fertilized egg – instead of just dispersing as heat or random growth, the energy is channeled into highly organized work (building organs). This organized work results in a low-entropy structure (the body) that is maintained by continuous energy use (food consumption, etc.). In QTF terms, the embryo is finding a path to achieve a state that expends the available energy in development to create a machine (the organism) that will continue expending energy (staying alive). The bioelectric field is a key piece of that path – a teleodynamic shortcut, one might say, that coordinates parts to achieve a global energy-processing structure. We can even think of the target anatomical attractor as a kind of free-energy minimum – a state that, once reached, is thermodynamically favorable to maintain (the organism in homeostasis) and effective at dissipating gradients (through metabolism and activity). Levin’s notion of anatomical setpoints and homeostatic loops for shape align with this idea: the body has preferred shapes that it will regenerate to, which are encoded in bioelectric circuits. Those shapes are not arbitrary; they’re the ones that evolution selected as viable and energetically favorable forms in the environment. Thus, morphogenesis is, under the hood, an energetic process of finding a form that works (survives, reproduces, dissipates entropy). The bioelectric code and genetic code are the information memory of how to get there and stay there. As QTF highlights, once a good trick is found (like a stable body plan that thrives), it gets locked in memory – in DNA, in electrical patterning – and reused over and over.

Now consider LLMs from a QTF perspective. At first glance, an LLM is not alive – it’s a computer program consuming electricity (very much increasing Boltzmann entropy in the server room!) and it doesn’t sustain itself or reproduce. However, in the abstract, the training of an LLM is an entropy game too: the training algorithm seeks to minimize surprise or unpredictability (cross-entropy loss) when predicting text. This is akin to minimizing free energy in the information-theoretic sense – a concept also present in frameworks like Karl Friston’s Free Energy Principle, which has similarities to QTF’s tenets. The trained model ends up as an information-rich, low-entropy artifact (much lower entropy than the random initial weights). It’s done by expending a lot of actual energy (GPUs running, etc.), thus satisfying the overall thermodynamic mandate: raise entropy in the environment, lower it internally by creating order (in the model’s weight configuration). The model’s existence now makes certain tasks (like generating coherent text) much more “energy-efficient” in an informational sense – instead of a human brainstorming from scratch (which took biological energy and effort), the model can rapidly produce text by virtue of the structured energy invested in it during training. In a loose sense, an LLM is a crystal of frozen knowledge that can dissipate the “energy” of a prompt (the uncertainty or potential in a prompt to generate many possible continuations) into a specific, orderly answer, much as a living cell dissipates a nutrient gradient into work and heat while producing an ordered outcome (like muscle contraction or thought).

QTF theory also suggests criteria for when something like an AI could be considered life-like. For instance, Frank et al. (as discussed on LFYadda) argue that Large Language Models or robot swarms might cross into true teleodynamic (life-like) behavior if they close the loop by harvesting their own energy, adapting their own hardware, and retaining memories of successful behaviors autonomously. In other words, to fully satisfy QTF, an AI agent would need to not just process information but also manage its energy and maintain itself – basically, become an embodied system with autonomy. While our focus here is on conceptual parallels, it’s intriguing to think that the gap between LLMs and living systems might be closed in the future by giving AI systems a body and letting them operate under the same physical constraints as organisms. They would then have their internal state space (representations, goals) coupled to the external world’s state (energy sources, physical forces) in a closed loop, much like an animal is coupled to its environment. This could force the AI’s internal representations to develop truly teleodynamic qualities – meaning they directly correlate to staying alive and efficient. Bioelectric networks in embryos already do this: if a developmental pattern fails, the organism dies (fails to dissipate energy long-term); successful patterns propagate. Perhaps future AI will similarly have to survive in some sense, leading to an even deeper convergence of principles between morphogenesis and machine intelligence.

Unseen Dimensions, Coherent Outcomes: A Synthesis

Stepping back, we see a beautiful symmetry in how nature and engineering have tackled the challenge of organizing complexity. In both a developing embryo and a thinking machine, order emerges from chaos by the communication of information across space. The embryo’s cells coordinate via an electrical field in a morphogenetic morphospace, aiming for an anatomical attractor that has been selected by evolution. The AI’s neurons coordinate via attention in a semantic vector space, aiming to satisfy the constraints of grammar, logic, and context learned from data. Both the morphogenetic field and the LLM embedding space are invisible scaffolds – you cannot point to them directly, but they are as real as a magnetic field that aligns iron filings. They are high-dimensional, meaning they encode a lot of information in parallel, and they influence each local element (cell or token) with a summary of what the whole wants to do.

Both systems also illustrate how emergent computation can give rise to apparent intelligence or purpose. A cell does not know what an eye is, yet the network of cells produces an eye, which seems like a very goal-directed outcome. A neuron in GPT knows nothing of English, yet the network writes an essay with logical flow. The intelligence resides not in a ghost in the machine, but in the structure of the connections and states – the shape of the bioelectric field, the layout of the weight matrix. And these structures are shaped by iterative processes (evolution/development for biology, training/inference for AI) that have a common logic: they are finding configurations that dissipate surprise or instability. Whether it’s reaching a low-energy stable body plan or a low-loss prediction of text, the system explores a vast possibility space and converges on an arrangement that locally maximizes consistency and globally performs work (building an organism or conveying meaning).
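The "low-energy stable body plan / low-loss prediction" parallel can be made concrete with a toy example. In the sketch below (illustrative only; the double-well landscape with two stable "body plans" at x = -1 and x = +1 is invented for the demo), a state follows its local gradient downhill and settles into whichever attractor basin it started nearest, with no global plan anywhere in the rules:

```python
def energy(x):
    # Double-well landscape: two stable configurations at x = -1 and x = +1.
    return (x * x - 1.0) ** 2

def grad(x):
    # Derivative of the energy above.
    return 4.0 * x * (x * x - 1.0)

def settle(x, lr=0.05, steps=200):
    """Follow the local gradient until the state rests in a basin."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# States starting on either side of the ridge at x = 0 settle
# into different attractors: settle(0.3) ends near +1.0,
# settle(-0.3) ends near -1.0.
```

The same purely local rule produces either outcome; which "anatomy" the system reaches is decided by its position in the landscape, not by any instruction naming the goal.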

There is also a shared theme of multiscale organization. In biology, molecules → cells → tissues → organs → organism is a hierarchy, and at each level, new patterns appear that constrain the levels below (the way an organ’s bioelectric field will influence gene expression in cells within it). In AI, we have bits → neurons → layers → the whole model → sequences in context → interactions with users, as a nested hierarchy. Patterns at one level (like a sentence topic at the sequence level) constrain choices at a lower level (word selection at the token level). This nesting is key to managing complexity – you can have local autonomy (cells doing their thing, or attention heads focusing on local patterns) and yet global coherence (a body forms, a narrative makes sense) because of these field-like integrations at each scale. Levin and colleagues speak of “navigating spaces” at multiple scales – metabolic space, physiological space, morphological space, behavioral space. AI similarly navigates token space, sentence space, knowledge space. The ability to transition and project patterns from one domain to another (like a bioelectric map guiding gene expression, or an embedding vector capturing a visual concept described in words) is what gives both biological and AI systems their remarkable flexibility.

Finally, considering purpose and agency: we tend to ascribe purpose to living systems (the embryo wants to become a body) and we resist doing so for machines. The parallel drawn here suggests that what we call purpose may emerge whenever we have a self-organizing system maintaining its state by processing flows of energy or information. The cells in the embryo are following simple rules, yet from outside it looks like they’re working together to achieve a goal. Likewise, an LLM has no true intention, yet when it answers a question correctly, it gives the illusion of purposefulness in its words. QTF theory elegantly reframes this: “when ‘looking purposeful’ is just good physics” – i.e., when a system that simply obeys physical/algorithmic laws ends up in a configuration that appears to us as goal-driven, because that configuration is the most efficient way to dissipate the gradients or uncertainty it had. Both embryos and LLMs fit this description. The embryo’s development is good physics and chemistry that happens to look like a mindful construction project. The LLM’s text generation is just matrix multiplications that happen to look like thoughtful language. In neither case is there a guiding ghost; the guiding force is the pattern itself, evolving and propagating through the system.

Conclusion: One Universe, Many Intelligent Spaces

The convergence of ideas from bioelectric morphogenesis and large language models teaches us that intelligence and organization can be understood as emergent properties of pattern dynamics in space – whether that space is literal physical space in an embryo or abstract vector space in a neural network. Both a living body and an AI mind are dynamical systems moving through high-dimensional state spaces, where each point in the space represents a possible configuration of the whole system. In development, the high-dimensional coordinates include every cell’s electrical and chemical state; in a transformer, they include every neuron’s activation. Coherent, useful outcomes (a viable organism, a meaningful dialogue) correspond to trajectories through these spaces that are not random but guided – guided by fields of potential, by feedback loops, by constraints learned or evolved.

By drawing this parallel, we gain metaphors that can cross-fertilize both fields. We might speak of an LLM having a “morphogenetic field of thought” shaping its reasoning, or of an embryo performing “distributed computation” to build a body. These are more than poetic analogies; they hint at shared principles. For AI researchers, biological development offers proof that decentralized architectures can achieve reliable intelligence and that memory can be stored in unconventional ways (perhaps future AI systems will incorporate electrical analog components or self-organizing circuits inspired by cells). For biologists, AI provides new tools to model and simulate morphogenesis (indeed, researchers are using deep learning to discover gene and voltage patterns that produce desired anatomies) and a new language to describe what cells are doing (talk of information processing, feedback control, and pattern completion).
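"Pattern completion" applies literally to one classic model that sits between the two fields: a Hopfield network stores a pattern as an attractor and restores it from a corrupted copy, much as a regenerating tissue restores a damaged anatomical pattern. A minimal pure-Python sketch (Hebbian learning with synchronous updates; the 8-bit pattern is an arbitrary stand-in for a stored "anatomy"):

```python
def train(patterns):
    """Hebbian weights storing +/-1 patterns as attractors of the network."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Synchronous sign updates pull a corrupted state back to a stored pattern."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, 1, 1, 1, -1, -1, -1, -1]      # the "correct" pattern
net = train([stored])
noisy = [1, -1, 1, 1, -1, 1, -1, -1]       # same pattern with two bits flipped
# recall(net, noisy) returns the stored pattern: the damage is "healed".
```

No unit knows the whole pattern; recovery falls out of pairwise couplings, which is exactly the sense in which cells performing pattern completion need no blueprint-reading homunculus.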

In a sense, both systems remind us that agency and understanding need not be explicitly programmed; they can emerge from the interplay of many simple parts governed by local rules within an appropriate architecture. The secret ingredient in both biology and AI is the architecture that allows rich interactions: gap junction networks and bioelectric feedback in one, attention layers and training loops in the other. With the right architecture, evolution or gradient descent can sculpt systems that appear astonishingly cognitive. QTF theory even suggests that such systems are natural outcomes given the thermodynamic imperative – wherever there’s energy and opportunity, something will start computing and self-organizing. Life and mind, then, might just be two manifestations of the universe’s drive to play with information under energy constraints. The embryo and the chatbot are distant cousins in this cosmic play.
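The claim that goal-like behavior "can emerge from the interplay of many simple parts governed by local rules" is easy to demonstrate. In the toy cellular automaton below (a hypothetical majority-vote rule, not a model of any real tissue), each cell sees only itself and its two neighbors, yet the population as a whole heals isolated defects, which from the outside looks like goal-directed repair:

```python
def step(cells):
    """Each cell adopts the majority of itself and its two neighbors
    (wrap-around boundary). No cell ever sees the global pattern."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2
            else 0 for i in range(n)]

damaged = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # uniform "tissue" with two defects
state = damaged
for _ in range(3):
    state = step(state)
# state settles to the repaired pattern [1]*10.
```

The "purpose" of repairing the tissue is nowhere in the rule; it is an observer's description of what the local dynamics reliably accomplish, which is precisely the QTF reading of purposefulness as good physics.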

In conclusion, the morphogenetic fields of biology and the embedding spaces of AI both demonstrate how unseen, high-dimensional frameworks can produce coherence and intelligence. They operate by accumulating history (memory), propagating influence across space (communication), and settling into effective configurations (attractors). By studying these parallels, we move toward a more unified understanding of intelligence as something that transcends flesh or silicon – residing instead in the geometry of interactive systems. The next time you marvel at a newborn organism or a clever AI response, consider the deep structures at work: electrical whispers among cells or algebraic whispers among neurons, each in their own language, converging on solutions that make the complex seem effortless. Such is the power of invisible spaces in guiding creation, in nature’s embryo or in the mind of a machine.

Figure: Both biology and AI navigate abstract spaces to achieve organized outcomes. (A) In development, cells operate in morphospace – an unseen landscape of possible anatomical configurations. Bioelectric circuits (voltage patterns across tissues) deform gene expression landscapes, guiding the organism toward stable anatomical attractors (basins in the landscape) that correspond to correct body structures. (B) In the realm of behavior, animals navigate 3D space to achieve goals (e.g., moving to a location); analogously, AI agents navigate a high-dimensional semantic space when generating language, moving through states that minimize “surprise” or error. (C) Biological systems link multiple spaces: metabolic, physiological, transcriptional, morphological, and behavioral, with each level’s activity influencing others. Likewise, AI models link layers of abstraction from raw data to high-level concepts. In both cases, multi-scale feedback loops and attractors ensure the system stays on course toward coherent, goal-like states.

