Chapter 4 gave us a rule for one bond and an inequality for one network: I ≥ 0. Now we want to see it. We will run the CLR on a grid, from random initial conditions, and watch the theorem play out. Every bond is deciding, locally. The whole is climbing, globally. Nothing is directed from above.
Before a whole grid, pause at the simplest case where bonds can influence each other: three oscillators in a chain, two bonds. Call them A–B and B–C. Bond AB sees the alignment of A and B; bond BC sees the alignment of B and C. Both run the CLR. Neither one can see the other directly. But they share node B — so whatever phase B settles into, both bonds feel.
The two bonds make independent local decisions but their outcomes are entangled. What B does reflects the tug-of-war between A and C. This is how a whole lattice's dynamics composes out of atomic bond-level rules.
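The tug-of-war at node B can be seen in a few lines. The chapter does not spell out the phase equations at this point, so this sketch freezes both couplings at a common value K and uses the standard Kuramoto form dθ_i/dt = ω_i + Σ_j K sin(θ_j − θ_i) — an assumption standing in for the CLR's phase dynamics, with the couplings not evolving at all. With ω_A = +0.5, ω_C = −0.5, and ω_B = 0, B locks exactly midway between A and C:

```python
import numpy as np

# Chain A–B–C with both couplings frozen at K. Only the phases evolve here;
# this is plain Kuramoto, not the CLR itself.
K = 2.0
omega = np.array([0.5, 0.0, -0.5])      # natural frequencies of A, B, C
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])             # chain adjacency: bonds A-B and B-C only
theta = np.array([0.3, -0.2, 0.1])      # arbitrary initial phases

dt, steps = 0.01, 20000
for _ in range(steps):
    # pull[i] = sum over neighbors j of sin(theta_j - theta_i)
    pull = (adj * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + K * pull)

# Instantaneous frequencies at the end of the run: all three have locked
# to the common frequency (the mean of omega, here 0), and B's phase sits
# exactly midway between A's and C's.
rates = omega + K * (adj * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
```

At the locked state the balance conditions give sin(θ_A − θ_B) = ω_A/K = 0.25 and sin(θ_B − θ_C) = −ω_C/K = 0.25, so A leads B by the same offset that B leads C: B's compromise is literally the average of its neighbors' phases.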
Scale up. Six oscillators in a ring, six bonds, every one living. Each bond sees only its two endpoints. But every oscillator sits between two bonds, so every decision ripples.
Notice: the ring settles on a pattern. Not the same pattern every time — resets give different outcomes. But every outcome is binary. No bond lingers in the middle. Each one chooses alive or dead, and the choices compose into a coherent-enough structure for the ring's frequencies to lock.
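The six-ring can be sketched with phases and couplings co-evolving. The CLR's actual equations are not given in this chapter, so the coupling update below is a stand-in: gradient descent on an illustrative potential whose derivative is K((K − a)² + 4/r − cos Δθ), built so that the death threshold cos(Δθ) > 4/r from Chapter 4 appears and an interior well exists only above it. The well scale a = 0.5 and learning rate ε are hypothetical. With a narrow ω spread and this seed, every bond locks and climbs into its well:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                                # six oscillators in a ring, six bonds
r, a = 8.0, 0.5                      # r sets the death threshold 4/r = 0.5;
c0 = 4.0 / r                         # a is a hypothetical well-position scale
nxt = (np.arange(N) + 1) % N         # bond b joins node b and node b+1

omega = rng.uniform(-0.1, 0.1, N)    # narrow spread of natural frequencies
theta = rng.uniform(-0.5, 0.5, N)    # modest initial phase scatter
K = rng.uniform(0.3, 0.8, N)         # random initial coupling, one per bond

eps, dt, steps = 0.2, 0.01, 40000
for _ in range(steps):
    diff = theta[nxt] - theta                  # phase difference across each bond
    pull = K * np.sin(diff)
    dtheta = omega + pull - np.roll(pull, 1)   # each node feels its two bonds
    # Stand-in CLR: dK/dt = -eps * K * ((K - a)^2 + 4/r - cos(diff)).
    # Interior well at K* = a + sqrt(cos(diff) - 4/r) once cos(diff) > 4/r.
    dK = -eps * K * ((K - a) ** 2 + c0 - np.cos(diff))
    theta = theta + dt * dtheta
    K = K + dt * dK

diff = theta[nxt] - theta
rates = omega + K * np.sin(diff) - np.roll(K * np.sin(diff), 1)
# All six bonds end up alive, each resting at K* = a + sqrt(cos(diff) - 4/r).
```

Widen the ω spread and some bonds fail to clear the threshold and die instead — different resets, different patterns, but the outcomes stay binary.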
Chapter 4 proved the Coherence Theorem globally: I(t) = dC/dt ≥ 0. But there is a more concrete way to see why it must hold, which becomes important now that many bonds are evolving at once. The global flux decomposes exactly into per-bond fluxes:
I(t) = dC/dt = Σ_b I_b(t), with each per-bond flux I_b(t) ≥ 0.
Recall the death threshold from Chapter 4: a bond survives only if cos(Δθ) > 4/r. Below that threshold the potential has no interior well; the only stable K is zero. Above it, there's a single well at K = K*. Every bond is solving the same simple problem at equilibrium: settle where the potential is flat, dV/dK = 0, at either K = 0 (dead) or K = K* (alive).
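The per-bond verdict can be packaged as a tiny calculator. Only the threshold itself, cos(Δθ) > 4/r, comes from the text; the form of the returned well position, K* = 0.5 + sqrt(cos Δθ − 4/r), is a hypothetical stand-in for illustration:

```python
import math

def bond_fate(dtheta: float, r: float) -> tuple[bool, float]:
    """Death threshold from Chapter 4: a bond survives only if cos(dtheta) > 4/r.
    The returned equilibrium K* uses a stand-in well position
    K* = 0.5 + sqrt(cos(dtheta) - 4/r); only the threshold is from the text."""
    c = math.cos(dtheta)
    if c <= 4.0 / r:
        return False, 0.0        # no interior well: only K = 0 is stable
    return True, 0.5 + math.sqrt(c - 4.0 / r)

# Since cos never exceeds 1, no bond at all can survive unless r > 4 —
# consistent with the self-consistent value r ≈ 5.9 quoted below for the
# full theory, which puts the threshold at cos(dtheta) > 4/5.9 ≈ 0.68.
```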
When a whole network of bonds runs the CLR, each bond finds one of these two attractors for itself. The result, aggregated over thousands of bonds, is a binary field: a distribution of K values with two sharp peaks and almost nothing in between.
Here is a proper grid — 14 by 14 oscillators, each connected to its four neighbors, every bond living. Start from random initial K's and random phases. Let go. Watch the field polarize.
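The grid run can be sketched with the same caveats as before: the phase dynamics are standard Kuramoto and the coupling update is an illustrative stand-in potential (derivative K((K − 0.5)² + 4/r − cos Δθ)), not the CLR's actual equations. The seed, spreads, and rates below are hypothetical choices that let a quick run settle; the point is the shape of the final K field — couplings pile up near zero or near the well, with almost nothing in the middle. With this narrow ω spread most bonds survive; widen it and patches die:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 14
r, a = 8.0, 0.5                          # death threshold 4/r = 0.5; a hypothetical
c0 = 4.0 / r
eps, dt, steps = 0.1, 0.02, 20000

omega = rng.uniform(-0.5, 0.5, (n, n))   # natural frequencies, modest spread
theta = rng.uniform(-1.0, 1.0, (n, n))   # random initial phases
Kh = rng.uniform(0.2, 0.9, (n, n - 1))   # horizontal bonds (node -> right neighbor)
Kv = rng.uniform(0.2, 0.9, (n - 1, n))   # vertical bonds (node -> lower neighbor)

def well_force(K, c):
    # Stand-in CLR: gradient descent on a potential whose interior well
    # exists only when c = cos(dtheta) exceeds the threshold c0 = 4/r.
    return -eps * K * ((K - a) ** 2 + c0 - c)

for _ in range(steps):
    dh = theta[:, 1:] - theta[:, :-1]    # phase difference across each h-bond
    dv = theta[1:, :] - theta[:-1, :]    # phase difference across each v-bond
    sh, sv = Kh * np.sin(dh), Kv * np.sin(dv)
    dtheta = omega.copy()
    dtheta[:, :-1] += sh; dtheta[:, 1:] -= sh    # each bond pulls both endpoints
    dtheta[:-1, :] += sv; dtheta[1:, :] -= sv
    theta += dt * dtheta
    Kh += dt * well_force(Kh, np.cos(dh))
    Kv += dt * well_force(Kv, np.cos(dv))

K_all = np.concatenate([Kh.ravel(), Kv.ravel()])
# The K field polarizes: values sit near 0 (dead) or near the well (alive),
# with almost nothing left in the middle band.
```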
A note on the controls: ω spread sets how much the oscillators' natural frequencies disagree — the wider the spread, the harder it is for bonds to lock. r is the regularization parameter inside the CLR, the signal-to-noise ratio that sets the death threshold cos(Δθ) > 4/r. In the full theory on the diamond lattice with vortex topology, r is not a free parameter — it is determined self-consistently (≈ 5.9) by the requirement that bulk couplings equal 16/π². In this standalone grid we expose it as a slider so you can feel how it shapes the bifurcation. Phases / bonds / both toggles what you see: phase-colored cells, the K-field as line thickness, or both at once.
What you are looking at — the particular pattern of alive bonds at the end of a run — is the network's learned connectivity. It was not imposed. It was produced by the dynamics, from initial conditions that did not know what pattern was going to emerge. The CLR found a structure consistent with the current natural frequencies and locked it in. If you hit reset and run again, a different pattern forms. But whichever pattern forms, it stays stable — the binary nature of the attractor provides its own inertia.
A network that has found a stable K-field has, in a very literal sense, formed a memory of the frequencies and the topology it was given. Its pattern of alive bonds holds that memory even when the oscillators' motion is interrupted.
This is already remarkable. But a memory is only as useful as what you can do with it. In the next chapter we ask: what if we drive the network with a specific pattern? What if we shape its natural frequencies externally, or apply a stimulus? Does the CLR learn a specific pattern and retain it when the stimulus is removed? Does it recognize the pattern when shown it again? The answer, as you might by now suspect, is yes — and the analogy to both Chladni plates and neural memory gets very tight, very fast.