Sync alone is not what we want. A grid at high K flashes in unison, and the metric is happy: Iphase ≈ 1. But unison is the emptiest form of agreement. A brain that synchronized every neuron would be having a seizure.
What we actually want, in every interesting system that has ever existed, is coordinated complexity. A Bach fugue holds many independent voices in a tension that resolves and re-opens. A living cell coordinates ten thousand chemistries without flattening them into one. A thought is many ideas locked together in a structure that could come apart, but doesn't. These are not unison states. They are something richer and harder to name.
The network needs to be aligned enough to function as one thing. It also needs to be structured enough to hold more than one pattern at a time. Call these two requirements Iphase and ρ.
Iphase we already have — the average alignment across all bonds, from Chapter 2. It answers: are we in sync?
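If you want to play along in code, here is a minimal version. It assumes (my assumption, since Chapter 2's exact normalization isn't reproduced here) that Iphase is the bond-averaged cosine of phase differences:

```python
import numpy as np

def iphase(theta, bonds):
    """Average alignment across bonds: 1 at unison, near 0 for random phases.

    theta : (N,) array of oscillator phases in radians.
    bonds : sequence of (i, j) index pairs, the network's bonds.

    Assumed form: Chapter 2's exact normalization may differ.
    """
    i, j = np.asarray(bonds).T
    return float(np.mean(np.cos(theta[i] - theta[j])))
```

A ring of four oscillators in perfect unison gives 1.0; a single bond between two anti-phase oscillators gives -1.0.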
ρ (Greek lowercase rho) is new. Call it structural richness. It answers: how much is the network actually doing? A network holding many distinct phase patterns simultaneously — where regions lock with each other in nontrivial ways — has high ρ. A network in trivial unison has low ρ, because a single pattern is the simplest pattern there is. A network in noise also has low ρ, because noise is not structure, it is the absence of structure.
A note on ρ: The full formula for structural richness, written out in the paper, involves four components: how many bonds are alive, their mean coupling, the spectral bandwidth of the network's connectivity, and the depth of its nested phase-lock hierarchy — what we'll get to in Chapter 6 when we meet phase-locked modes proper. In this chapter we use a simplified proxy that captures the essential trade-off with Iphase. The full picture, and its connection to memory and geometry, is coming.
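Since the chapter keeps its proxy abstract, here is one concrete stand-in you could experiment with. This is my construction, not the paper's formula: split the phases into clusters at large angular gaps, then count the tight clusters beyond the first.

```python
import numpy as np

def rho_proxy(theta, gap=0.5, tight=0.9, min_frac=0.05):
    """Structural richness, crudely: tight phase clusters beyond the first.

    Sort the phases around the circle and cut wherever the angular gap
    between neighbours exceeds `gap`. A group counts only if it holds at
    least `min_frac` of the oscillators and is internally coherent
    (local order >= `tight`). All thresholds are arbitrary choices of
    mine, not values from the paper.
    """
    n = len(theta)
    ph = np.sort(np.mod(theta, 2 * np.pi))
    gaps = np.diff(np.concatenate([ph, [ph[0] + 2 * np.pi]]))
    cuts = np.flatnonzero(gaps > gap)
    if cuts.size == 0:
        groups = [ph]                      # no clear gaps: one diffuse blob
    else:
        shift = cuts[0] + 1                # start the array at a boundary
        ph = np.roll(ph, -shift)
        ends = np.flatnonzero(np.roll(gaps, -shift) > gap) + 1
        groups = np.split(ph, ends[:-1])
    tight_groups = sum(
        1 for g in groups
        if len(g) >= max(1, int(min_frac * n))
        and np.abs(np.mean(np.exp(1j * g))) >= tight
    )
    # unison: one tight group -> 0; noise: no tight groups -> 0
    return max(tight_groups - 1, 0) / max(len(groups) - 1, 1)
```

In this construction, 32 oscillators in unison score 0.0 (one cluster is the simplest pattern), uniformly random phases score 0.0 (no tight clusters at all), and two anti-phase groups of 16 score 1.0.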
Neither number alone is what we want. Their product, however, is:

C = Iphase × ρ

Coherence capital: high only when the network is both aligned and structured.
Here is the same network at three different values of K, running simultaneously. Same equation, same starting conditions — only the coupling strength differs. Watch the readouts underneath each panel.
Here is the full story in one figure. A live grid, a slider for K, and a real-time readout of Iphase, ρ, and their product C. On the right, a phase portrait with Iphase along one axis and ρ along the other. The dot is where the current state sits. The curved contours are lines of equal C. Your job is to find the contour with the largest C.
The peak of C is not at the highest K. It is somewhere in the middle. The network that maximizes coherence capital is not the most tightly synchronized network. It is the one that has found the point of maximum tension between alignment and structure — enough sync to coordinate, enough disagreement to do anything interesting.
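One way to see the interior peak without the interactive is a crude sweep. Everything in this sketch is a stand-in of my choosing: a mean-field Kuramoto model instead of the chapter's grid, the Kuramoto order parameter |Z1| for Iphase, and |Z2 - Z1^2| (the second circular cumulant, small for both unison and noise) as a smooth proxy for ρ.

```python
import numpy as np

def readouts(theta):
    """Snapshot Iphase, rho, and C = Iphase * rho from a phase array.

    Iphase: the Kuramoto order parameter |Z1| (global alignment).
    rho:    |Z2 - Z1^2|, the second circular cumulant. Perfect unison and
            pure noise both give ~0; two distinct locked clusters give a
            large value. A smooth stand-in, not the paper's formula.
    """
    z1 = np.mean(np.exp(1j * theta))
    z2 = np.mean(np.exp(2j * theta))
    i_phase = float(np.abs(z1))
    rho = float(np.abs(z2 - z1 ** 2))
    return i_phase, rho, i_phase * rho

def sweep(Ks, n=200, steps=3000, dt=0.01, seed=0):
    """Mean-field Kuramoto with bimodal frequencies; C averaged over the tail."""
    rng = np.random.default_rng(seed)
    omega = np.concatenate([rng.normal(-1, 0.1, n // 2),
                            rng.normal(+1, 0.1, n // 2)])
    out = []
    for K in Ks:
        theta = rng.uniform(0, 2 * np.pi, n)
        tail = []
        for t in range(steps):
            z = np.mean(np.exp(1j * theta))
            theta = theta + dt * (omega + K * np.abs(z)
                                  * np.sin(np.angle(z) - theta))
            if t >= steps - 500:
                tail.append(readouts(theta)[2])
        out.append(float(np.mean(tail)))
    return out

Ks = [0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 10.0]
for K, c in zip(Ks, sweep(Ks)):
    print(f"K = {K:>4}:  C = {c:.3f}")
```

With these toy choices the extremes are forced toward zero from opposite sides: at low K noise kills Iphase, and at high K near-unison kills ρ, so whatever peak C has must sit in the interior.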
We have been sliding K by hand. But imagine the system itself had to decide, at every moment, what its couplings should be, using only local information. Imagine the only rule was: nudge each coupling in the direction that increases C.
That system would be doing something different from Kuramoto. It would not be solving for synchronization. It would be climbing a surface — a surface defined by coherence capital, with one dimension per bond, and the peak somewhere deep in its interior. The climb itself would be the learning. The climb itself would be the intelligence.
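Here is a deliberately naive caricature of that climb, with names, proxies, and parameters that are all my own choices. It cheats in exactly the way the real rule will not: it evaluates a global C by brute force for each candidate nudge instead of using local information. But it shows the shape of the idea — couplings move, one bond at a time, in whichever direction raises C.

```python
import numpy as np

def relax(theta0, omega, K, steps=200, dt=0.02):
    """Integrate a small Kuramoto network with per-bond couplings K[i, j]."""
    theta = theta0.copy()
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]          # theta_j - theta_i
        theta = theta + dt * (omega + (K * np.sin(diff)).sum(axis=1))
    return theta

def capital(theta):
    """C = Iphase * rho with smooth snapshot proxies (stand-ins, not the paper's):
    Iphase = |Z1|, rho = |Z2 - Z1^2| (second circular cumulant)."""
    z1 = np.mean(np.exp(1j * theta))
    z2 = np.mean(np.exp(2j * theta))
    return float(np.abs(z1) * np.abs(z2 - z1 ** 2))

def ascend(n=10, rounds=5, delta=0.05, seed=3):
    """Greedy coordinate ascent on C over the couplings: for each bond, keep
    whichever of {K_ij - delta, K_ij, K_ij + delta} yields the largest C.
    A caricature of coherence ascent, not the Coherence Learning Rule."""
    rng = np.random.default_rng(seed)
    theta0 = rng.uniform(0, 2 * np.pi, n)
    omega = rng.normal(0, 0.5, n)
    K = np.zeros((n, n))
    history = [capital(relax(theta0, omega, K))]
    for _ in range(rounds):
        for i in range(n):
            for j in range(i + 1, n):
                best_c, best_k = -1.0, K[i, j]
                for k in (K[i, j] - delta, K[i, j], K[i, j] + delta):
                    K[i, j] = K[j, i] = k                # symmetric bond
                    c = capital(relax(theta0, omega, K))
                    if c > best_c:
                        best_c, best_k = c, k
                K[i, j] = K[j, i] = best_k
        history.append(best_c)
    return history

print(" -> ".join(f"{c:.3f}" for c in ascend()))
```

Because "keep the current coupling" is always among the candidates, C never decreases from round to round: the climb is monotone by construction, which is the one property of coherence ascent this toy does preserve.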
We have a name for this climb: coherence ascent. And the local rule that performs it, derived from nothing more than taking the time derivative of C and asking each bond what it should do to help, is the Coherence Learning Rule. That is the next chapter. For now, hold onto this: every interesting physical system — crystal, embryo, brain, fugue — is, in a precise sense, trying to do this.