
Quotient Geometry of Statistical Manifolds Under Dimensional Collapse

Information Geometry (in preparation, 2025)

What's this about?

When a statistical manifold is observed through a lower-dimensional map, what geometric structure survives? This paper provides the general quotient-geometric framework that makes the minimal embedding threshold inevitable.


The key insight: collapse maps foliate the manifold into fibers along which the observed Fisher metric degenerates. Parameters on the same fiber produce identical observed distributions—they're observationally indistinguishable, even if the full distributions differ.
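To make the degeneracy concrete, here is a minimal numerical sketch (an illustrative toy, not a construction from the paper): a two-parameter Gaussian family N((μ1, μ2), I) observed through the collapse map x ↦ x1. Moving μ2 traces out a fiber, and a Monte Carlo estimate of the observed Fisher metric is rank-deficient in exactly that direction.

```python
import numpy as np

# Illustrative toy (not from the paper): the model N((mu1, mu2), I) on R^2,
# observed through the collapse map x -> x1. The observed law is N(mu1, 1),
# so mu2 parameterizes a fiber of observationally indistinguishable points.
rng = np.random.default_rng(0)

def observed_fisher(mu, n_samples=200_000):
    """Monte Carlo estimate of the observed Fisher metric at mu.

    Scores of the observed density N(mu1, 1) with respect to (mu1, mu2):
    d/dmu1 log p(x1) = x1 - mu1, and d/dmu2 log p(x1) = 0 identically.
    """
    x1 = rng.normal(mu[0], 1.0, size=n_samples)
    scores = np.stack([x1 - mu[0], np.zeros(n_samples)], axis=1)
    return scores.T @ scores / n_samples  # E[score score^T]

print(np.round(observed_fisher(np.array([0.3, -1.2])), 3))
# ~ [[1. 0.]
#    [0. 0.]]  -> rank 1: the metric degenerates along the fiber direction
```

The null direction of the observed metric is the tangent to the fiber; the quotient identifies all parameters along it.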

The Fisher metric descends to the quotient if and only if it's bundle-like: its inner products of horizontal vectors are constant along fibers. Totally geodesic fibers provide a sufficient condition. This settles when dimensional reduction preserves statistical structure and when it destroys it.
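For reference, the bundle-like condition in its standard Riemannian-foliation form (Reinhart's criterion, stated here as background; the paper's precise formulation may differ):

```latex
% Standard bundle-like (Reinhart) condition, stated as generic background;
% the paper's exact statement may differ. Let \pi : M \to B be the collapse
% map, V = \ker d\pi the vertical (fiber) distribution, H = V^{\perp} the
% horizontal complement, and g the Fisher metric on M.
\[
  W\bigl(g(X, Y)\bigr) = 0
  \quad \text{for every vertical field } W
  \text{ and all basic horizontal fields } X, Y,
\]
% i.e. horizontal inner products are constant along each fiber, so g
% descends to a metric \bar{g} on B with \pi^{*}\bar{g} = g|_{H}.
```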

The covering number bounds quantify what's lost: N(ε) ~ ε^(-r), where r is the projection rank. Lower-rank projections yield coarser quotients, with fewer distinguishable classes at any resolution. The k=3 threshold from the companion paper emerges as a special case: below k=3, phase-preserving projections drop from r=2 to r=1 scaling.
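The scaling can be checked numerically on a toy quotient. In this sketch (illustrative assumptions throughout: a uniform two-parameter family, with coordinate projections standing in for the collapse maps), a greedy ε-net estimate of the covering number recovers exponents near the projection rank r.

```python
import numpy as np

rng = np.random.default_rng(1)

def covering_number(points, eps):
    """Greedy upper bound on the eps-covering number of a point cloud."""
    remaining = points.copy()
    n = 0
    while len(remaining):
        center = remaining[0]
        # Discard everything within eps of the chosen center, repeat.
        remaining = remaining[np.linalg.norm(remaining - center, axis=1) > eps]
        n += 1
    return n

# Toy quotient images of a 2-parameter family under projections of
# rank r = 2 (identity) and r = 1 (drop one coordinate).
theta = rng.uniform(0, 1, size=(20_000, 2))
images = {2: theta, 1: theta[:, :1]}

for r, pts in images.items():
    eps = np.array([0.2, 0.1, 0.05, 0.025])
    N = np.array([covering_number(pts, e) for e in eps])
    slope = np.polyfit(np.log(1 / eps), np.log(N), 1)[0]
    print(f"rank {r}: fitted exponent ~ {slope:.2f}")  # expect ~ r
```

Halving ε roughly doubles the class count at r=1 but quadruples it at r=2, which is the precise sense in which lower-rank projections are coarser.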

This forms a two-paper program with the minimal embedding result: Paper 1 gives the crisp threshold; Paper 2 provides the geometric machinery.

Why it matters

Every time you compress information—whether it's a neural network bottleneck, a summary statistic, or your brain creating a memory—you're performing dimensional collapse. This paper answers: what survives? The answer isn't "it depends on the details." There are hard geometric constraints. Some information is genuinely lost (parameters become indistinguishable). Some structure survives (when the geometry is "nice" in a precise sense). And the amount you can distinguish at any resolution follows predictable scaling laws. This matters for AI interpretability (why we can't always reconstruct what a network "knows"), for neuroscience (why high-D brain states produce low-D behavior), and for any science that uses dimensionality reduction.

Key findings

  • Fiber Structure Theorem: collapse maps foliate M into fibers of observationally non-identifiable parameters

  • Quotient Metric Theorem: Fisher metric descends iff bundle-like; totally geodesic fibers are sufficient

  • Covering bounds: N(ε) ~ ε^(-r) quantifies distinguishability at finite resolution

  • Minimal embedding threshold (k=3) emerges as the drop from r=2 to r=1 scaling

Citation

Todd, I. (2025). Quotient Geometry of Statistical Manifolds Under Dimensional Collapse. Information Geometry (in preparation).

Workflow: Claude Code with Opus 4.5 (Anthropic) for drafting; GPT-5.2 (OpenAI) and Gemini 3 Pro (Google) for cross-paper consistency review. Author reviewed all content and takes full responsibility.