Scaling, without going vertical.
The scaling story for this stack is not more parameters. It is more primitives. Each new addition has to clear the same bar: linear-time or better, composable with the existing four, transparent, auditable, validated. The roadmap below is the next round of primitives and the operations that connect them.
Near-term — compounding the four.
- BTUT × Crystara. Use BTUT-coordinated workers as the energy explorers inside Crystara’s System 2. Linear-time coordination over a fleet of samplers is the cheapest way to fill in the blank-space map that the module crystallizer reads from.
- Crystara × NIV. NIV is the first emitted scalar, and the recipe transfers: run Crystara’s H₀/H₁/H₂ pipeline on macro / monetary / energy manifolds, then project the stable features down to named, interpretable scalars. Each scalar is published with its walk-forward validation out of the box; a sketch of the recipe follows this list.
- PDE × everything. Every primitive produces evidence, and PDE is the canonical sink: submissions, approvals, chunks, embeddings, audit trail. Treating BTUT simulation traces and Crystara module genealogies as governance-grade documents — ingested through the same pipeline as the constitution — is cheap, and it makes the whole stack walk-backable; the second sketch below shows the ingestion path.
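To make the Crystara × NIV recipe concrete, here is a minimal sketch of the scalar-emission step, assuming the stable H₀/H₁/H₂ features have already been computed as a time-by-feature matrix. The names `project_to_scalar`, `walk_forward`, and `emit_scalar` are hypothetical illustrations, not Crystara's or NIV's actual API, and the correlation-based scoring is a stand-in for whatever the real walk-forward check uses.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class PublishedScalar:
    name: str            # e.g. "NIV", or a new macro / monetary / energy scalar
    values: np.ndarray   # the emitted time series
    construction: dict   # how the scalar was built, published alongside the values
    validation: dict     # walk-forward results, published alongside the values


def project_to_scalar(h_features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Project stable H0/H1/H2 features (a T x F matrix) down to one interpretable series."""
    return h_features @ weights


def walk_forward(scalar: np.ndarray, target: np.ndarray, window: int = 252) -> dict:
    """Score the scalar against a target series on out-of-sample windows only."""
    scores = []
    for start in range(0, len(scalar) - 2 * window + 1, window):
        train = slice(start, start + window)
        test = slice(start + window, start + 2 * window)
        # learn nothing on the test window beyond a sign convention fit on the train window
        sign = np.sign(np.corrcoef(scalar[train], target[train])[0, 1]) or 1.0
        scores.append(sign * np.corrcoef(scalar[test], target[test])[0, 1])
    return {"window": window, "folds": len(scores),
            "oos_corr_mean": float(np.mean(scores)) if scores else None}


def emit_scalar(name: str, h_features: np.ndarray, weights: np.ndarray,
                target: np.ndarray) -> PublishedScalar:
    values = project_to_scalar(h_features, weights)
    return PublishedScalar(
        name=name,
        values=values,
        construction={"features": "H0/H1/H2 persistence", "weights": weights.tolist()},
        validation=walk_forward(values, target),
    )
```

The shape matters more than the arithmetic: the construction and the validation travel with the values, which is what publishing a scalar "out of the box" has to mean in practice.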
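A second sketch, for the evidence-sink point: a BTUT simulation trace submitted through the same content-addressed ingestion path a constitutional document would take. `ingest_document`, the store layout, and the audit event format are illustrative assumptions, not PDE's actual schema.

```python
import hashlib
import json
from datetime import datetime, timezone


def chunk(text: str, size: int = 2000) -> list[str]:
    """Fixed-size chunks; real chunking would be smarter, but the flow is the same."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def ingest_document(source: str, kind: str, body: str, store: dict) -> str:
    """Submit a piece of evidence through the same path any other document takes."""
    doc_id = hashlib.sha256(body.encode()).hexdigest()  # content-addressed identifier
    store[doc_id] = {
        "source": source,          # e.g. "btut-sim-run" or "crystara-genealogy"
        "kind": kind,              # same schema whatever the document is
        "chunks": chunk(body),     # embedded downstream by the same pipeline
        "submitted_at": datetime.now(timezone.utc).isoformat(),
    }
    # the audit trail records the submission itself, which is what keeps the stack walk-backable
    store.setdefault("_audit", []).append({"event": "submitted", "doc_id": doc_id})
    return doc_id


# Usage: a simulation trace is just another document.
store: dict = {}
trace = json.dumps({"agents": 1_000_000, "steps": 500, "outcome": "converged"})
doc_id = ingest_document("btut-sim-run", "simulation-trace", trace, store)
```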
Mid-term — new primitives.
- A planner primitive. Once coordination is free, planning over coordinated agents is the next linear-time problem. The target is a narrow, transparent planner with the same auditability contract as NIV.
- A memory primitive. PDE is stateful at the document level. The next layer is stateful at the session level — a retrieval substrate that composes cleanly with Crystara modules. Still content-addressable, still auditable; a sketch follows this list.
- An observer primitive. Every primitive should publish its own diagnostics as a first-class artifact. The observer primitive is the one that ingests those diagnostics and emits liveness signals across the stack.
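A rough sketch of what "content-addressable, still auditable" could mean at the session level. `SessionMemory` and its methods are hypothetical, not a committed design; the constraint it illustrates is that every write is addressed by its hash and every operation lands in an append-only log.

```python
import hashlib
import time


class SessionMemory:
    """Hypothetical session-level store: content-addressed entries plus an append-only audit log."""

    def __init__(self, session_id: str):
        self.session_id = session_id
        self.entries: dict[str, str] = {}   # content hash -> content
        self.audit: list[dict] = []         # append-only, inspectable after the fact

    def put(self, content: str) -> str:
        key = hashlib.sha256(content.encode()).hexdigest()
        self.entries[key] = content
        self.audit.append({"op": "put", "key": key, "t": time.time()})
        return key  # the address is the hash, so a caller can verify what it stored

    def get(self, key: str) -> str | None:
        self.audit.append({"op": "get", "key": key, "t": time.time()})
        return self.entries.get(key)


# Usage: a Crystara module stashes an intermediate result and hands only the key forward,
# which keeps session state small and the whole exchange replayable from the audit log.
mem = SessionMemory("session-001")
key = mem.put("intermediate module output")
assert mem.get(key) == "intermediate module output"
```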
Sovereign integration potential.
A non-trivial fraction of the customers for a substrate like this are governments, regulators, and public institutions — groups that cannot use frontier-lab APIs for legal, compliance, or sovereignty reasons. The Latent Ocean was designed to meet them where they are: open source, auditable, deployable on commodity hardware, every scalar signal published with its construction. PDE’s transparent approval log is, in effect, an FOIA-native architecture. A single Fly.io region handles a million-agent BTUT simulation. Crystara runs on 3–8 GPUs for manifold-scale datasets.
This is not a frontier lab in miniature. It is a different shape of AI infrastructure — narrow, composable, inspectable — suited for contexts where the deciding factor is provenance, not benchmark points.
Inference-time substrate alignment.
When alignment is framed as a training-time problem, the only available levers are RLHF-style ones. When alignment is framed as an inference-time problem — which is what the Latent Ocean assumes — the levers multiply: the approval ledger, the typed module router, the transparent signal formulas, the hybrid vector+FTS retrieval contract with public provenance. None of those levers requires retraining a frontier model. They require primitives that are composable and auditable by construction.
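As one concrete example of such a lever, here is a sketch of a hybrid vector plus full-text retrieval call that returns its own provenance. The weighting scheme, the keyword-overlap scorer, and the function names are illustrative assumptions, not the stack's actual contract; the point is that every result carries the formula that ranked it.

```python
import numpy as np


def keyword_score(query: str, text: str) -> float:
    """Crude full-text signal: fraction of query terms present in the document."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def hybrid_search(query: str, query_vec: np.ndarray, docs: list[dict],
                  w_vec: float = 0.7, w_fts: float = 0.3) -> list[dict]:
    """Each doc is {"id", "text", "vec"}; each result carries its own scoring provenance."""
    results = []
    for doc in docs:
        v = cosine(query_vec, doc["vec"])
        f = keyword_score(query, doc["text"])
        results.append({
            "id": doc["id"],
            "score": w_vec * v + w_fts * f,
            # provenance is part of the retrieval contract, not an afterthought
            "provenance": {"vector": v, "fts": f, "weights": {"vec": w_vec, "fts": w_fts}},
        })
    return sorted(results, key=lambda r: r["score"], reverse=True)
```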
This is where I think the serious work is going. The horizontal stack is not a personal preference; it is the only stack whose alignment surface is readable in the first place.