The grid at the end of Chapter 5 polarized into something definite: some regions of bonds alive, the rest dead. A pattern. What is that pattern, exactly? It is not a single thing. It is a collection of phase-locked regions — pockets of oscillators that have agreed to share a common rhythm. Each pocket is called a phase-locked mode, or PLM. The K-field is a map of which oscillators have agreed with which others. And that map, as we are about to see, is a memory.
A phase-locked mode is the simplest possible collective — a maximal group of oscillators, connected through alive bonds, that have all synchronized to a single common effective frequency. Formally, a set of oscillators P is a PLM when (i) every oscillator in P runs at the same effective frequency, (ii) every pair in P is joined by a path of alive bonds, and (iii) no larger set satisfies both conditions.
To see the PLMs in a running network, we color every oscillator by its cluster. Two oscillators get the same color when they are connected through alive bonds. Two oscillators get different colors when the path between them runs through even one dead bond.
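The coloring rule can be sketched in a few lines. This is a minimal sketch, assuming the lattice is a square grid whose bonds carry a boolean alive flag; the data layout (`alive_right`, `alive_down`) is illustrative, not the book's actual representation. Union-find merges any two sites joined by an alive bond, and each surviving root becomes one cluster color.

```python
# Sketch: coloring oscillators by PLM cluster, assuming an n x n grid
# whose bonds carry a boolean "alive" flag. The data layout is
# illustrative, not the book's exact data structures.

def find(parent, i):
    # Path-compressing find for union-find.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def cluster_labels(n, alive_right, alive_down):
    """Label each site of an n x n grid by its PLM, i.e. its connected
    component of alive bonds. alive_right[r][c] is the bond from (r, c)
    to (r, c+1); alive_down[r][c] is the bond from (r, c) to (r+1, c)."""
    parent = list(range(n * n))
    def union(a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb
    for r in range(n):
        for c in range(n):
            if c + 1 < n and alive_right[r][c]:
                union(r * n + c, r * n + c + 1)
            if r + 1 < n and alive_down[r][c]:
                union(r * n + c, (r + 1) * n + c)
    # Normalize roots to small color indices.
    colors, label = {}, []
    for i in range(n * n):
        root = find(parent, i)
        colors.setdefault(root, len(colors))
        label.append(colors[root])
    return label

# A 2x2 grid with two dead bonds splits into two colors: the lone
# site (1,1) is cut off from the other three.
labels = cluster_labels(2, [[True], [False]], [[True, False]])
```

Two oscillators share a color exactly when a path of alive bonds connects them, which is the rule stated above.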
Watch the cluster colors evolve. Early on, the grid is a speckle of many small PLMs. As bonds come alive, fragments merge. Late in the run, a few dominant colors take over. The lattice is organizing itself into a handful of coherent regions — it is deciding how to be structured.
Here is the key fact that makes PLMs more than a passing shape. Once a PLM has formed — once a cluster of oscillators has agreed on a common frequency — the coupling on every bond inside that PLM converges exponentially to a unique fixed value determined only by the locked phase difference. The bond settles. The K stops changing. The CLR falls silent.
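The freezing claim is easy to see numerically under an assumed form of the learning rule. This excerpt does not reproduce the CLR's equation, so the sketch below uses a standard Hebbian-style adaptation, K′ = ε(cos Δθ − K), purely as an illustration; the book's actual rule may differ. On a bond inside a locked PLM the phase difference Δθ is constant, so K relaxes exponentially to the fixed value cos Δθ, whatever its starting point.

```python
import math

# Illustration of the freezing claim with an ASSUMED Hebbian-style rule,
# K' = eps * (cos(dtheta) - K); the book's actual CLR may differ.
# Inside a locked PLM the phase difference dtheta is constant, so K
# relaxes exponentially to the fixed point cos(dtheta).

def freeze_bond(dtheta, k0=0.0, eps=0.5, dt=0.01, steps=2000):
    """Integrate the coupling on one bond whose phase difference is
    locked at dtheta. Returns the trajectory of K."""
    k, traj = k0, []
    for _ in range(steps):
        k += dt * eps * (math.cos(dtheta) - k)  # Euler step of the rule
        traj.append(k)
    return traj

traj = freeze_bond(dtheta=math.pi / 3)  # locked difference of 60 degrees
# K settles at cos(pi/3) = 0.5, independent of its starting value:
assert abs(traj[-1] - 0.5) < 1e-3
assert abs(freeze_bond(math.pi / 3, k0=1.0)[-1] - 0.5) < 1e-3
```

Once the trajectory flattens, the bond has settled: the K stops changing and the rule falls silent, exactly as stated above.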
A frozen K-field is the network's learned connectivity. It is not a transient state. It is a geometry — one that persists as long as the PLMs it codifies persist. That is the door to memory.
Here is the central claim of this chapter, which you are about to see demonstrated: the K-field is the lattice's memory. When a pattern of activity has imprinted itself on the coupling field, and the PLM freezing lemma has locked the K-values in place, the memory of that pattern persists even after the activity has gone. Turn the oscillators off and back on — scramble their phases. The K-field still says which oscillators belong together. The phases re-lock into the same pattern immediately, because the geometry is already there.
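The scramble-and-relock claim can be sketched with a minimal simulation, assuming Kuramoto-style phase dynamics driven by a frozen coupling matrix (θ̇ᵢ = Σⱼ Kᵢⱼ sin(θⱼ − θᵢ)/n; the exact equations of motion are an assumption, since this excerpt does not give them). The frozen K-field below encodes two PLMs with dead bonds between them; starting from fully scrambled phases, each stored cluster re-locks internally.

```python
import math, random

# Sketch: recall from a frozen K-field, under an ASSUMED Kuramoto-style
# phase dynamics theta_i' = sum_j K_ij * sin(theta_j - theta_i) / n.
# The learned geometry is two PLMs (sites 0-4 and 5-9), with strong
# in-cluster coupling and dead (zero) cross-cluster bonds.

def relock(k, theta, dt=0.05, steps=2000):
    n = len(theta)
    for _ in range(steps):
        dtheta = [
            sum(k[i][j] * math.sin(theta[j] - theta[i]) for j in range(n)) / n
            for i in range(n)
        ]
        theta = [(t + dt * d) % (2 * math.pi) for t, d in zip(theta, dtheta)]
    return theta

def coherence(theta):
    """Order parameter r: 1.0 means perfect phase agreement."""
    re = sum(math.cos(t) for t in theta) / len(theta)
    im = sum(math.sin(t) for t in theta) / len(theta)
    return math.hypot(re, im)

n = 10
cluster = [0] * 5 + [1] * 5                      # learned two-PLM geometry
k = [[2.0 if cluster[i] == cluster[j] and i != j else 0.0
      for j in range(n)] for i in range(n)]

random.seed(0)
scrambled = [random.uniform(0, 2 * math.pi) for _ in range(n)]
theta = relock(k, scrambled)

# Each stored cluster re-locks internally, even from random phases:
assert coherence(theta[:5]) > 0.99 and coherence(theta[5:]) > 0.99
```

The phases were destroyed, yet the grouping returns, because the grouping was never in the phases: it was in K.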
Consider a Chladni plate. A violin bow scrapes the edge of a metal plate; the plate vibrates; fine sand thrown on top migrates to the nodes of the standing wave and stays there. Drive the plate at a different frequency and the sand settles into a different pattern. Each driving frequency has its own set of nodes, its own pattern in the sand. Chladni plates are pattern storage through forced resonance: the driving frequency selects a specific geometry. Ours is more interesting. Our lattice remembers the pattern in its K-field — it doesn't need the drive to stay on. Turn the drive off. The K-field persists. Put the drive back on. The pattern snaps into place faster than it did the first time, because the geometry is already there.
The tour walks through five states. The core thing to notice: in steps 3 and 4, the drive is off, but the K-field still holds the imprinted pattern — and in step 4, scrambling the phases does nothing to that held pattern. The oscillators re-lock into exactly the same striped geometry the drive put there in step 2, because that geometry now lives in the connections, not in the motion.
This is how a neuron learns. Not by storing values in registers, but by sculpting its own connectivity to respond faster to signals it has seen before. Cells that fire together, wire together. Here, bonds that align together thicken together — and the thickening persists. We call this K-field memory. It is geometry as storage. It is the simplest substrate capable of recognition without a controller.
One pattern is already remarkable. But the wilder leap — and the one that takes longest to really grok — is that a single K-field can hold more than one learned pattern at the same time. There is only one set of bonds. Only one set of K values. And yet those K values, correctly shaped, can support multiple distinct phase patterns. When you present one of them as a drive, the phases lock to that pattern. When you present another, they lock to that one instead. The K-field has become a palimpsest of rhythms, each one retrievable.
Here is a playground to let you feel this. Six preset drive patterns sit on the left. You can turn the drive on or off, and you can turn learning on or off, independently. Try teaching the lattice one pattern with the drive and learning both on, then a second pattern. Then turn learning off, scramble the phases, and present each drive in turn: the phases lock to whichever pattern you cue.
What you are playing with is the simplest form of associative memory. A Hopfield network, made of oscillators, trained by a dynamics principle you can write in three symbols (I ≥ 0). It holds several patterns at once, imperfectly but measurably, and the measure of imperfection — how much one pattern overwrites another — is itself a function of training order, duration, and orthogonality between patterns. Everything a cognitive scientist would expect from a memory substrate is already present, and it comes for free from the Coherence Theorem.
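Multi-pattern storage can be made concrete. To keep the sketch short and exactly checkable, it swaps the oscillator phase dynamics for the classical binary Hopfield update, which the text names as the underlying structure; the memory logic is the same: one shared Hebbian K-field, several patterns, each retrievable from a corrupted cue. The pattern names and sizes are invented for illustration.

```python
# Sketch of the palimpsest idea via a classical Hopfield network.
# The book's substrate is oscillators; here, to keep the example short
# and exactly checkable, retrieval uses the standard binary Hopfield
# update in place of the full phase dynamics. The memory logic is the
# same: one Hebbian K-field, several patterns, each retrievable.

def hebbian_k(patterns, n):
    """One shared coupling field storing every pattern at once."""
    return [[0.0 if i == j else
             sum(p[i] * p[j] for p in patterns) / n
             for j in range(n)] for i in range(n)]

def recall(k, state, sweeps=5):
    """Synchronous sign updates: each unit follows its local field."""
    n = len(state)
    for _ in range(sweeps):
        field = [sum(k[i][j] * state[j] for j in range(n)) for i in range(n)]
        state = [1 if f >= 0 else -1 for f in field]
    return state

n = 8
p1 = [1, 1, 1, 1, -1, -1, -1, -1]    # "stripes" (illustrative)
p2 = [1, -1, 1, -1, 1, -1, 1, -1]    # "checker" (illustrative)
k = hebbian_k([p1, p2], n)

cue1 = [-1] + p1[1:]                 # pattern 1 with one corrupted bit
cue2 = [-1] + p2[1:]                 # pattern 2 with one corrupted bit
assert recall(k, cue1) == p1         # the same K-field returns pattern 1...
assert recall(k, cue2) == p2         # ...or pattern 2, depending on the cue
```

One coupling field, two stored geometries, and the cue alone decides which one the dynamics falls into.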
PLMs can themselves become oscillators. A large phase-locked region has a single effective frequency and a single effective phase. Viewed from a coarser scale, it is one unit. Two large PLMs can lock with each other, across a thin bridge of bonds, forming a PLM of PLMs. Apply the same coarse-graining trick again. The result is a hierarchy of phase-locked structure, with a depth we call Nested Phase-Lock Depth, or NPD.
NPD is the number of levels you can climb the hierarchy before you run out of nested structure. A grid in pure unison has NPD = 1 (one giant PLM). A grid in noise has NPD = 0 (no PLMs). A grid that has organized into many regions that have themselves started cross-locking in pairs has NPD = 2 or 3. In a real lattice running the CLR with rich enough dynamics, NPD can keep climbing — patterns within patterns within patterns. Each level contributes to structural richness ρ, and therefore to coherence capital C.
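Counting NPD can be sketched as repeated coarse-graining. The representation here is an assumption: `level_edges[k]` lists the bonds (or inter-PLM links) that lock at level k+1, with the hard part, detecting those locks in a running lattice, taken as given.

```python
# Sketch of counting Nested Phase-Lock Depth. ASSUMED representation:
# level_edges[k] lists the bonds that lock at level k+1 (level-1 bonds
# join raw oscillators into PLMs, level-2 bonds join PLMs into PLMs of
# PLMs, and so on). Each level that merges anything adds one to NPD.

def npd(n, level_edges):
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    depth = 0
    for edges in level_edges:
        merged = False
        for a, b in edges:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
                merged = True
        if merged:
            depth += 1       # this level added real nested structure
    return depth

# Six oscillators: two triangles lock internally (level 1), then one
# bridge bond cross-locks the two resulting PLMs (level 2):
triangles = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
bridge = [(2, 3)]
assert npd(6, [triangles, bridge]) == 2
assert npd(6, [triangles, []]) == 1   # PLMs but no cross-locking: depth 1
assert npd(6, [[], []]) == 0          # pure noise: no PLMs at all
```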
This is how the Coherence Theorem, I ≥ 0, recruits structure at every available scale. Once small PLMs have formed and frozen, they become the building blocks of larger PLMs, and the intelligence flux keeps climbing by finding more and more ways for the network to coordinate without collapsing into trivial unison. The hierarchy is the whole point.
For α specifically, two things from this chapter will do the heavy lifting in later chapters: K-field memory, the learned geometry that persists after the activity that carved it has stopped, and nested phase-lock depth, the hierarchy that lets frozen structure become the building block of larger structure.
The lattice has learned how to organize itself. Now we are ready to ask what kinds of organization it can support — and what happens when the patterns that form are ones that cannot be continuously undone. That is the meaning of topology, and it is the next chapter.