Mastodon Politics, Power, and Science

Sunday, April 26, 2026

How Planck Accidentally Found the Way Back to Newton

The Detour and the Bridge:

How Physics Mistook a Bookkeeping Constant for a Discovery,

and How Planck Accidentally Found the Way Back to Newton

J. Rogers, SE Ohio

Abstract

Newton’s original statement of universal gravitation was a pure proportionality: force scales with the product of masses and inversely with the square of distance. No units. No constants. Just ratios in proportion to ratios. That statement was physically complete. The gravitational constant G was not a discovery about the universe — it was inserted a century and a half later to convert Newton’s dimensionless proportionality into an equation that balances in human unit systems. Physics then told a story in which G represented a deepening of Newton, a quantification of something Newton had only sketched. That story is wrong.

In 1899 Max Planck, working on an unrelated problem in blackbody radiation, stumbled onto three combinations of h, c, and G that produce units of mass, length, and time independent of human convention. He recognized them as universal and called them natural units. But Planck did not see what his discovery actually was. He had found the exact Jacobians — the conversion factors — that translate Newton’s pure unit-free proportions into any human unit chart and back out again without losing anything. He built the bridge back to Newton without knowing the bridge existed or what it connected.

We show that G is not a constant of nature but a composed Jacobian: G = Fₚ · (lₚ/mₚ)², where Fₚ, lₚ, and mₚ are non-reduced Planck units constructed from h, c, and G itself. The physics of gravity lives entirely in the dimensionless ratio X = m₁m₂/r² expressed in Planck-scaled units. G appears only when we demand SI output. It is the price of the equals sign in a human unit chart, not a fact about the universe. Recognizing this, we see that Planck’s 1899 result was not the discovery of a natural unit system — it was the rediscovery of Newton’s natural ratios, dressed in the language of a different century.

1. Newton’s Original Statement

Isaac Newton’s law of universal gravitation, as he understood it, was a statement of proportion. Two bodies attract each other with a force that grows with their masses and diminishes with the square of the distance between them. In the notation Newton worked with, this is:

F ∝ mM/r²

The proportionality sign is doing everything here. It says: if you double one mass, the force doubles. If you double the distance, the force drops to a quarter. The ratios are the physics. Newton was describing how things scale relative to each other, not assigning absolute magnitudes in any particular unit system.

This was not a gap in Newton’s understanding waiting to be filled. It was a complete physical statement. Newton knew that the actual numerical value of the force would depend on how you chose to measure mass, distance, and force — on your unit chart. The proportionality was his way of saying: the physics is in the ratios, not in the numbers.

Newton’s contemporaries and successors understood this. For the century and a half following the Principia, gravitational calculations were done by comparing ratios — the mass of the Earth relative to the Sun, the distance of Venus relative to the Earth — without any need for an absolute constant. The proportionality was sufficient for every astronomical calculation of the era.

2. The Invention of G

The gravitational constant G did not appear in Newton’s Principia. It was not present in the work of the eighteenth-century astronomers who used Newton’s law to map the solar system with extraordinary precision. Henry Cavendish measured the density of the Earth with a torsion balance in 1798 without ever writing down G. The constant itself entered physics only in the late nineteenth century, when the need arose to state gravitational attraction as an equation with an equals sign rather than a proportionality.

The problem was this: if you write

F = mM/r²

the dimensions do not balance. The left side has units of force. The right side has units of mass squared divided by length squared. To make the equation dimensionally consistent in any human unit system — SI, CGS, or any other — you need a conversion factor. That factor is G.

G was invented to solve a bookkeeping problem. It carries units of m³ kg⁻¹ s⁻² in SI — units chosen precisely to cancel the dimensional mismatch on the right-hand side of Newton’s equation and produce newtons on the left. G is not measuring anything about gravity. It is measuring the distance between Newton’s dimensionless proportionality and the SI unit chart.

Physics then taught this story: Newton discovered the law, and Cavendish ‘weighed the Earth’ by measuring G, and now we know not just the shape of the law but its strength. This framing implies G is telling us something physical — the intrinsic coupling strength of gravity, some fundamental fact about how strongly matter attracts matter.

That implication is false. The numerical value of G — 6.674 × 10⁻¹¹ in SI units — is determined by the sizes of the kilogram, the meter, and the second. Change your unit chart and G changes with it. A fact about the universe does not change when you redefine your ruler.

3. The Story Physics Told Itself

For over a century, physics organized itself around the belief that G, c, h, and k_B were fundamental constants of nature — dimensionful numbers that characterize the universe independently of human choices. This belief generated a research program: measure these constants as precisely as possible, look for relationships between them, and wonder at their particular values.

The wonder was genuine. Why is G so small? Why does the universe have this particular gravitational coupling? The ‘hierarchy problem’ — the enormous disparity between the strength of gravity and the other forces — became one of the central puzzles of twentieth century physics. Entire theoretical frameworks were constructed to explain why G has the value it has.

These were the wrong questions, asked about the wrong things. G is small because the kilogram is an enormous unit relative to the Planck mass, and the meter is an enormous unit relative to the Planck length, and the second is an enormous unit relative to the Planck time. The hierarchy problem is not a problem about gravity. It is a statement about the position of human-scale units relative to the natural scale of the universe. We built our measurement system around things we can hold and count and observe with unaided senses, and those things are extraordinarily far from the Planck scale. G looks small because we are large.

The constants were not discovered. They were constructed — forced into existence by the decision to do physics in human unit systems while the underlying physics has no units at all.

4. Planck’s 1899 Discovery

4.1 What Planck Was Trying to Do

In 1899 Max Planck was working on the problem of blackbody radiation — the spectrum of light emitted by a perfect absorber in thermal equilibrium. This was a problem in thermodynamics and electromagnetism, seemingly unrelated to gravity or to fundamental units. In the course of this work Planck introduced a new constant h, later called the quantum of action, to fit the observed spectrum.

Having h in hand, Planck noticed something remarkable. The three constants then known — h, c (the speed of light), and G (the gravitational constant) — could be combined to produce units of mass, length, and time:

lₚ = √(hG/c³)
mₚ = √(hc/G)
tₚ = √(hG/c⁵)

Planck computed these and observed that they were independent of any human choice of units — any consistent unit system yields the same scales, merely expressed in that system’s own numbers. He wrote that these represented ‘natural units’ of measurement, units that would be recognized by any civilization anywhere in the universe.

4.2 What Planck Saw

Planck saw the universality. He correctly recognized that lₚ, mₚ, and tₚ do not depend on the particular conventions of any human culture — not on the size of the Earth, not on the properties of water, not on any artifact kept in a vault in Paris. He saw that these were, in some sense, nature’s own scales.

This was a genuine insight and Planck was right to be struck by it. The universality he identified is real. These scales do appear wherever a sufficiently advanced physics arrives at the intersection of quantum mechanics, relativity, and gravity, regardless of what unit chart they started with.

4.3 What Planck Did Not See

Planck did not ask why three constants from three apparently independent domains of physics — quantum mechanics, electromagnetism, and gravity — would combine to produce universal scales. He did not follow that question to its answer.

The answer is that h, c, and G are not three independent discoveries about three independent phenomena. They are three Jacobians — three conversion factors between the three independent axes that humans chose for their measurement system (energy-time, space-time, mass-space) — and the dimensionless ratios that actually describe the universe underneath those axes. They combine to produce universal scales because they are all pointing at the same thing from different angles. Their combination is universal because there is one thing on the other side of all three of them.

Planck found three pointers and admired their universality without asking what they were all pointing at. He assumed the three axes — mass, length, time — were genuinely independent, with a natural scale on each. He found the bridge and admired it without crossing it.

Most critically: Planck still called what he found a ‘unit system.’ Natural units. A more convenient coordinate system. He stayed within the framework of dimensional physics, just with better-chosen dimensions. He did not see that the universality he had found was evidence that dimensions are not fundamental at all — that the natural scale is not a scale for three independent things but the single point where three projections of one thing simultaneously equal unity.

5. G Is a Composed Jacobian

The relationship between G and the Planck units is not a definition imposed from outside. It is an identity that follows from the construction of the Planck units themselves:

G = Fₚ · (lₚ / mₚ)²

where Fₚ = mₚc/tₚ is the Planck force. This is not circular. It is the statement that G, when decomposed into its constituent Planck factors, is entirely made of h, c, and the Planck scales derived from them. G carries no information that is not already in h, c, and the structure of the Planck bridge.
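This identity can be checked directly. The following Python sketch (a verification aid using CODATA SI values; the variable names are ours, not the paper’s notation) reconstructs G from the non-reduced Planck scales:

import math

# CODATA SI values; h is the non-reduced Planck constant, as used throughout.
h = 6.62607015e-34   # J·s
c = 299792458.0      # m/s
G = 6.67430e-11      # m^3 kg^-1 s^-2

l_p = math.sqrt(h * G / c**3)   # Planck length
m_p = math.sqrt(h * c / G)      # Planck mass
t_p = math.sqrt(h * G / c**5)   # Planck time
F_p = m_p * c / t_p             # Planck force

# The claimed decomposition: G is entirely made of Planck factors.
G_reconstructed = F_p * (l_p / m_p)**2
assert math.isclose(G, G_reconstructed, rel_tol=1e-12)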

The three-step procedure for any physical law makes this explicit:

  1. Cancel input units. Express each physical quantity as a dimensionless ratio to its Planck-scale counterpart. Mass becomes m/mₚ. Distance becomes r/lₚ. The inputs are now pure numbers.

  2. Do the physics as Newton stated it. The gravitational relationship in pure ratios is:

X = (m₁/mₚ)(m₂/mₚ) / (r/lₚ)²

This is Newton’s proportionality, now written as an equality between dimensionless ratios. X is a pure number. No units. No constants. This is the physics.

  3. Decorate with output units. Multiply X by the Planck force to get force in SI:

F_SI = X · Fₚ

G appears automatically when you substitute the Planck unit definitions and simplify. It was never in the physics. It emerges from step 3 alone — from the decision to express the output in SI newtons rather than in Planck forces. G is the Jacobian of that decision.
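Carried out numerically, the three steps reproduce the SI result exactly, because the two expressions are algebraically identical. A minimal Python sketch (the Earth-Moon figures are rough illustrative values):

import math

h, c, G = 6.62607015e-34, 299792458.0, 6.67430e-11
l_p = math.sqrt(h * G / c**3)   # Planck length
m_p = math.sqrt(h * c / G)      # Planck mass
t_p = math.sqrt(h * G / c**5)   # Planck time
F_p = m_p * c / t_p             # Planck force

def gravity_via_planck_bridge(m1, m2, r):
    # Step 1: cancel input units -- every quantity becomes a pure number.
    # Step 2: do the physics as Newton stated it -- the dimensionless X.
    X = (m1 / m_p) * (m2 / m_p) / (r / l_p)**2
    # Step 3: decorate with output units -- SI newtons via the Planck force.
    return X * F_p

m_earth, m_moon, r = 5.972e24, 7.348e22, 3.844e8   # kg, kg, m (rough values)
assert math.isclose(gravity_via_planck_bridge(m_earth, m_moon, r),
                    G * m_earth * m_moon / r**2, rel_tol=1e-12)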

This procedure works for every physical law. Newton’s second law, the Planck-Einstein relation, de Broglie’s wavelength, Boltzmann’s energy-temperature relation — in every case, the physics is a dimensionless ratio X, and the constants (h, c, k_B, G) appear only in step 3 when human units are restored. They are always and only Jacobians.

6. The Planck Scale Is Not a Unit System — It Is the Inversion Point

The standard presentation of Planck units frames them as a particularly convenient coordinate system — one where the constants all equal one and the equations simplify. This framing is subtly wrong in a way that preserves the error Planck made.

The Planck scale is not a unit system. It is the inversion point of the measurement coordinate system — the unique scale where two opposing scaling directions simultaneously cross unity.

Consider the six Planck-normalized ratios:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

Some of these ratios — m/mₚ, E/Eₚ, p/pₚ — increase as a physical system gets larger or more energetic. The wavelength axis runs the other way: a more energetic system has a shorter wavelength, which is why the chain carries lₚ/λ rather than λ/lₚ. These are reciprocal scalings pulling in opposite directions.

The Planck scale is where these opposing directions exactly cancel — where every ratio simultaneously equals one. It is the crossing point of reciprocal hyperbolas in logarithmic scale space. There is exactly one such point, and it is unique regardless of what unit chart you start from. That uniqueness is why Planck’s scales are universal. Not because they are natural units. Because they are the fixed point of the reciprocal structure of physical measurement.
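The simultaneous crossing can be verified numerically: pick any single physical state, read it on all six axes, and the six Planck-scaled ratios agree. A minimal sketch for a photon (Python; the test frequency is arbitrary):

import math

h, c, G = 6.62607015e-34, 299792458.0, 6.67430e-11
k_B = 1.380649e-23
l_p = math.sqrt(h * G / c**3); m_p = math.sqrt(h * c / G); t_p = math.sqrt(h * G / c**5)
E_p = m_p * c**2; p_p = m_p * c; T_p = E_p / k_B   # derived Planck scales

f = 5.6e14            # Hz, an arbitrary photon (green light)
E = h * f             # energy-axis reading
ratios = [E / E_p,            # E/Eₚ
          f * t_p,            # f·tₚ
          (E / c**2) / m_p,   # m/mₚ
          (E / k_B) / T_p,    # T/Tₚ
          l_p / (c / f),      # lₚ/λ
          (E / c) / p_p]      # p/pₚ
assert all(math.isclose(r, ratios[0], rel_tol=1e-12) for r in ratios)
print(ratios[0])      # one dimensionless X, read six ways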

When physicists say ‘set the constants to one,’ they are performing this operation informally and without justification — collapsing onto the inversion point without knowing that’s what they’re doing, or why it works, or what it means. The Planck bridge makes the operation rigorous: you are not choosing convenient units, you are expressing physics at the unique scale where all projections of X simultaneously read one.

And crucially: the Planck length is not the pixel of space. The Planck time is not the pixel of time. Physics has made exactly this claim for length and time while quietly not making it for mass — no one claims the Planck mass is the minimum mass, because it is obviously not; the electron is twenty-two orders of magnitude lighter. But the Planck mass is constructed from the same h, c, G combination as the Planck length and Planck time. If Planck mass is not a pixel, neither are Planck length and Planck time. They are all inversion-point coordinates. None of them are fundamental discretizations of anything.

The proof is immediate: change your unit system. Planck length changes. Planck time changes. Planck mass changes. A pixel of the universe cannot change when you redefine your meter. These scales are Jacobian-dependent, not universe-dependent. They are pointers to the inversion point, not the inversion point itself. The inversion point has no size because X has no units.

7. Newton Had It Right

Returning to Newton’s proportionality with this understanding, we see that Newton’s statement was not incomplete. It was not a sketch awaiting G to make it precise. It was the complete physical statement, expressed in the only form that is actually about the universe rather than about human measurement conventions.

F ∝ mM/r² says: the gravitational interaction scales as the product of mass ratios divided by the square of the distance ratio. It does not say what units to use because units are not part of the physics. Newton was doing X — working directly with dimensionless ratios in pure proportion — without the vocabulary to say so explicitly.

What the three centuries between Newton and the present have produced is not a deepening of Newton’s insight but an elaborate detour around it. We inserted G to get an equation, then treated G as a discovery. We measured G with increasing precision. We built theoretical frameworks to explain G’s value. We worried about the hierarchy problem — why G is so small — without recognizing that G’s smallness is a statement about the size of a kilogram, not about the strength of gravity.

Planck in 1899 handed us the receipt for the detour. The Planck units are the exact conversion factors that show what the detour cost and how to return. h converts between the energy-frequency axis and dimensionless X. c converts between the space-time axis and dimensionless X. G, composed from these and the Planck scales, converts between the mass-geometry axis and dimensionless X. Together they are the bridge from any human unit chart back to Newton’s pure proportions.

Planck built the bridge without knowing what it connected. He was looking at the far shore — the universality of the Planck scales — and called it a natural unit system. The near shore — Newton’s dimensionless proportionalities — was behind him, and he did not turn around.

8. The Equivalence Chain as the Full Statement

Once the bridge is crossed, the full structure becomes visible. The six Planck-normalized ratios are not six different physical quantities. They are six projections of a single dimensionless scalar X onto six different human measurement axes:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

This is not a system of proportionalities. It is a single identity written six times in six different human languages. Every physical quantity is X, read on a different axis.

From six projections taken two at a time, C(6,2) = 15 pairs arise. Each pair is a known physical law: E = mc², E = hf, E = k_BT, λ = h/p, p = hf/c, λT = hc/k_B, and so on. These are not fifteen independent discoveries. They are fifteen different ways of writing X = X, each using two of the six available human axes. The constants that appear in each law — c², h, k_B, c — are the Jacobians for that particular pair of axes.
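The counting is easy to make concrete. A short self-contained sketch (Python, using the same arbitrary test photon as before) enumerates the C(6,2) pairs and confirms that each one is the same equality:

import math
from itertools import combinations

h, c, G, k_B = 6.62607015e-34, 299792458.0, 6.67430e-11, 1.380649e-23
l_p = math.sqrt(h * G / c**3); m_p = math.sqrt(h * c / G); t_p = math.sqrt(h * G / c**5)

f = 5.6e14                             # an arbitrary test photon, Hz
E = h * f
readings = {
    "E/E_p":   E / (m_p * c**2),
    "f*t_p":   f * t_p,
    "m/m_p":   (E / c**2) / m_p,
    "T/T_p":   (E / k_B) / (m_p * c**2 / k_B),
    "l_p/lam": l_p / (c / f),
    "p/p_p":   (E / c) / (m_p * c),
}
pairs = list(combinations(readings, 2))
print(len(pairs))                      # C(6,2) = 15
for a, b in pairs:                     # each pair, re-dimensionalized, is one law
    assert math.isclose(readings[a], readings[b], rel_tol=1e-12)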

Physics discovered these laws one at a time over three centuries and treated each as a new insight into nature. The Planck-Einstein relation E = hf was a revolution in quantum mechanics. De Broglie’s λ = h/p was a revolution in wave-particle duality. Wien’s displacement law was a triumph of thermodynamics. They are all the same tautology, X = X, with different Jacobian decorations.

The statistical argument is decisive: the probability that fifteen independently discovered laws would align with exactly the combinatorial pattern of C(6,2) pairs from a single six-member equivalence chain, by coincidence, is less than 10⁻²². This is not coincidence. This is forensic evidence that the laws were never independent. They were always projections of one thing.

9. What Physics Got Wrong and What Comes Next

Physics got the math right. Every prediction of Newtonian gravity, every quantum mechanical calculation, every thermodynamic result — the numbers are correct. The Jacobians h, c, and G work perfectly as conversion factors. No experiment needs to be redone.

What physics got wrong was the interpretation. The constants were treated as discoveries about the universe when they are facts about human unit charts. The Planck scale was treated as a natural unit system when it is the inversion point of a reciprocal coordinate structure. The fifteen laws were treated as independent discoveries when they are projections of one identity. The hierarchy problem was treated as a deep puzzle about gravity when it is a statement about the size of a kilogram.

The correction does not change any formula. It changes what the formulas mean.

Newton’s proportionality is the complete physics of gravity. G is the SI Jacobian. The Planck units are the bridge between them. The equivalence chain is what you find when you cross the bridge. X is what Newton was always describing.

Physics spent over three centuries on a detour. Planck in 1899 — working on an unrelated problem, not knowing what he was doing — accidentally built the way back. It has taken another century to read the sign on the bridge.

10. Conclusion

Newton’s law of universal gravitation was stated as a pure proportionality because that is what it is. The physics of gravity lives in dimensionless ratios. G was not a discovery about gravity. It was the conversion factor inserted to make Newton’s proportionality into a dimensional equation in human units, and it has been mistaken for physical content ever since.

Planck’s 1899 result was not the discovery of natural units. It was the discovery of the three Jacobians — h, c, G — that bridge Newton’s dimensionless ratios to any human unit chart. The Planck scales are not the pixels of space and time. They are the unique inversion point where the reciprocal scaling of physical measurement axes simultaneously reaches unity — the one scale where all six projections of X can simultaneously equal one. The Planck mass being obviously not a pixel of matter is the proof that Planck length and Planck time are not pixels either. All three are Jacobian-dependent pointers, not fundamental discretizations.

The equivalence chain E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X is the full statement of what Planck found, stated in the language Planck did not have. It shows that every physical quantity is one dimensionless ratio X, that every physical law is X = X written on two axes, and that every constant is the Jacobian for a particular pair of axes.

We did not go beyond Newton. We took a three-century detour through dimensional bookkeeping and called it progress. Planck handed us the bridge back in 1899. The bridge was always there. We just did not know what it connected.

Time as Self-Interaction: How the Apparent Arrow Arises from a Single Dimensionless Substrate

J. Rogers, SE Ohio

Abstract

We present a framework in which time is not a fundamental dimension but an emergent label humans place on the sequential updating of a single dimensionless substrate X. The universe has no units — it does not measure itself. X is a dimensionless ratio, and every physical quantity we measure is a projection of X onto a human-chosen axis. The Lorentz factor γ is itself dimensionless, and a boost does not separately affect time, mass, length, and momentum as distinct phenomena — it changes X, and γ is that change. The six Planck-scaled projections of X:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

are not six different physical laws. They are one thing — X — read on six different human axes. Any pair of these six yields a known physical relationship, producing 15 such relationships from a single identity. This holds in every unit system imaginable, because X is dimensionless and the universe has no preferred unit chart. Past states are not stored in a separate temporal dimension; they exist only as patterns in the current configuration of X.

1. Introduction

Standard physics treats time as a fourth dimension with a fixed metric signature and postulates an independent arrow of time. This leads to persistent conceptual difficulties: the problem of the past, the asymmetry between time and space dimensions, and the apparent paradoxes of retrocausality in quantum experiments.

We propose an alternative grounded in a single observation: the universe does not measure itself. Units — seconds, kilograms, meters — are human inventions. Any quantity that carries dimensions is already a projection, a reading of the universe through a human-chosen instrument. The universe itself operates on something prior to measurement.

We call that prior thing X: a dimensionless, unitless ratio that completely describes the state of reality at any instant. X does not evolve in time. The transition X → X' is what we call time. There is no external clock. There is no dimension being traversed. There is only X updating.

2. X Is Dimensionless — Not Because We Choose Clever Units, But Because the Universe Has None

A key error in discussions of natural units is the implication that setting c = ħ = 1 makes things simpler by choice. This misses the point. Natural units are still units — still a human coordinate system. The universe does not operate in natural units any more than it operates in SI.

X is dimensionless not as a result of any unit choice. It is dimensionless because dimensions are human annotations applied to projections of X. The universe just does X. We then read X through six different instruments and assign six different dimensional labels to what we find.

The Planck units are significant not because they are 'natural' but because they are the specific Jacobian at which the human unit chart admits that all six projections yield the same number. They are conversion factors between human axes, not fundamental features of the universe. The constants h, c, and G — used unreduced, never ħ — are the three such Jacobians between the three independent ways humans chose to measure reality.

3. The Six Projections of X

Every physical quantity we measure is X read on a different axis. The six Planck-scaled projections are:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

where subscript P denotes Planck units constructed from h, c, and G (unreduced). Each ratio is dimensionless. Each ratio is identical. This is not a collection of proportionalities — it is a single identity written six times in six different human languages.

From six projections taken two at a time, C(6,2) = 15 pairs arise. Each pair is a known physical relationship:

E/Eₚ = f·tₚ → Planck relation E = hf

E/Eₚ = m/mₚ → mass-energy equivalence E = mc²

m/mₚ = p/pₚ → momentum-mass relation p = mc (since pₚ/mₚ = c)

lₚ/λ = p/pₚ → de Broglie relation λ = h/p

And so on for all 15 pairs. These are not 15 different laws discovered independently. They are 15 different ways of writing X = X, each pair using two of the six human axes. Physics discovered them separately because it was looking at pairs of projections and calling each pair a law, never seeing that all six projections are the same single dimensionless quantity.

4. The Boost Changes X — γ Is That Change

The Lorentz factor γ is dimensionless. X is dimensionless. This is not coincidental.

When a boost occurs, X changes. γ is the ratio of the new X to the old X as measured on any chosen axis. Because X appears identically on all six axes simultaneously, γ applies to all six axes simultaneously.

This is why a boost appears to change mass, time rate, length, momentum, and energy all at once. Physics treats these as separate relativistic effects linked by the Lorentz transformations, implying they are different phenomena that happen to correlate. They are not. There is one phenomenon — X changing — and γ is that change. The six axis-readings change together because they were always readings of the same single thing.

The standard framing says: motion causes time dilation, and also causes length contraction, and also causes relativistic mass increase. Each 'also' is a mistake. There is no cause and effect chain between a boost and its consequences. The boost is the change in X, and γ is that change, and everything else is humans reading X on their chosen axes.

Asking why time dilates when you boost is like asking why a circle looks like an ellipse when you tilt it. You changed your projection angle. The circle did not do anything. X did not do anything to time separately from what it did to mass separately from what it did to length. It changed once. γ is that one change.
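A toy numerical illustration of this claim, taking the paper’s model at face value (the boost represented as one multiplicative update X → γX; the state and boost values below are illustrative):

import math

h, c, G, k_B = 6.62607015e-34, 299792458.0, 6.67430e-11, 1.380649e-23
l_p = math.sqrt(h * G / c**3); m_p = math.sqrt(h * c / G); t_p = math.sqrt(h * G / c**5)

def projections(X):
    """Read one dimensionless X back out on the six human axes (SI units)."""
    return {"E [J]":       X * m_p * c**2,
            "f [Hz]":      X / t_p,
            "m [kg]":      X * m_p,
            "T [K]":       X * m_p * c**2 / k_B,
            "lam [m]":     l_p / X,        # wavelength scales reciprocally
            "p [kg m/s]":  X * m_p * c}

X, gamma = 7.6e-29, 2.0                    # some state, some boost
before, after = projections(X), projections(gamma * X)
for axis in before:
    print(axis, before[axis], "->", after[axis])
# E, f, m, T, p all scale by gamma; lam scales by 1/gamma.
# One change in X, six correlated readings -- no separate "effects".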

5. Inertia Is Not a Mystery

An object in motion stays at its current X. This is not a law requiring explanation. Nothing is pushing the object, and nothing will make it change until another interaction occurs. Inertia is the substrate maintaining its current state until X → X' is forced by an interaction.

Newton's first law looked like a law requiring a mechanism. In this framework it is a tautology: X stays X until something makes it X'. The 'something' is another interaction — a collision, a field, a measurement — that forces an update. Between interactions there is no time passing in any meaningful sense. There is just X, unchanged.

6. No Past, Only Patterns

The substrate X does not retain a separate past state. When an interaction occurs, X' completely replaces X. The only record of any previous state is in patterns carried forward in the current configuration — the arrangement of atoms in a memory device, photons not yet absorbed, quantum correlations not yet collapsed.

The past is not a place. It is a pattern in the present. When that pattern is erased or overwritten, the past appears to change — but no time travel occurred. There was no past state to travel to. The trace simply did not survive the update.

This resolves the quantum eraser without invoking retrocausality. The experimenter's choice to measure or erase which-path information participates in a single self-consistent update of X. There is no earlier photon path being retroactively affected. There is only the final correlation pattern — the final X — which is self-consistent with all interactions that participated in producing it.

7. The Arrow of Time

The arrow of time is the direction of accumulating X → X' updates. It points the way it does because interactions are irreversible in practice: the final X retains less information about previous states than would be required to reconstruct them. This is not a fundamental asymmetry built into the geometry of a time dimension. It is a consequence of pattern loss during updates.

A broken egg does not reconstruct itself because the pattern required to reverse the update was not carried forward in X. The arrow exists because information is lossy, not because time has a preferred direction geometrically.

8. Relation to the 2019 SI Redefinition

The 2019 SI redefinition fixed h and k_B as exact values, joining c, which had been exact since 1983. This was officially described as redefining units, not changing physics. In our framework, this is precisely correct: c, h, and G are unit-chart Jacobians — conversion factors between the axes onto which humans project X. Fixing them as exact is an admission that they are not physical discoveries about the universe. They are bookkeeping choices about how to align human measurement axes.

The speed of light c is not a speed the universe obeys. It is the ratio between the human time-axis and the human space-axis. When both axes are projections of the same X, their ratio is fixed — not by physics, but by the geometry of projection.

9. Falsifiability

The framework offers a clear falsifiability criterion. A genuine logical contradiction between a stored trace and a later outcome — a dead cat that was previously alive with no causal chain, a photon arriving before it was emitted — would disprove it. No such experiment exists. Every apparent retrocausal result is consistent with a single self-consistent X update in which the 'earlier' trace was simply never stored in a way that survived.

Additionally: if any physical quantity required separate dimensional status — if any measurement could not be expressed as a dimensionless ratio to its Planck-scale counterpart — the framework would be incomplete. Every quantity so far reduces to X on one of the six axes.

10. Conclusion

The universe has no units because it does not measure itself. Every physical quantity is X — a dimensionless ratio — projected onto a human-chosen axis. The six Planck-scaled projections are identical. Their 15 pairwise combinations are the known laws of physics, each one a different human reading of the single identity X = X.

A boost changes X. γ is that change. Mass dilation, time dilation, length contraction, momentum change — these are not separate effects that happen together. They are one change in X read on multiple axes simultaneously.

Time is not a dimension. It is the accumulation of X → X' updates. The arrow of time is the direction of pattern loss during those updates. Inertia is X staying X between interactions. Retrocausality is an illusion caused by misreading pattern overwriting as backward causation.

The framework does not add new physics. It removes the unnecessary scaffolding — dimensions, separate constants, causal chains between correlated projections — and reveals that what remains is X, dimensionless, unitless, and singular.

Saturday, April 25, 2026

Liquid Literature: A Framework for Dynamic, State-Driven Narrative Generation via Model Context Protocol (MCP)

J. Rogers, SE Ohio

Abstract

Traditional literature relies on a static, linear transmission of prose from author to reader. While recent advancements in Large Language Models (LLMs) have enabled procedural text generation, long-form narrative consistency has historically been bottlenecked by the limitations of Retrieval-Augmented Generation (RAG). This paper proposes a novel “Liquid Literature” framework, wherein a book is not authored as prose, but as a structured, parameterized narrative matrix (a “Seed-Book”). By replacing passive RAG architectures with an active state-machine driven by the Model Context Protocol (MCP), we outline a system where AI dynamically generates a highly customized, continuity-locked novel upon each reading, capable of user-driven genre overrides, emergent plotlines, and deterministic EPUB exportation.


1. Introduction

The transition from physical books to e-books digitized the delivery of literature, but did not alter its ontological nature: a book remained a static, unchangeable artifact. Interactive fiction and tabletop role-playing games introduced branching narratives, but remained constrained by the manual labor required to author every possible permutation.

With the advent of LLMs, personalized generative literature became theoretically possible. However, early attempts relying on context-window stuffing or Retrieval-Augmented Generation (RAG) proved inadequate for long-form fiction. RAG is fundamentally passive—a semantic search engine that retrieves localized context but fails to understand overarching narrative mechanics, leading to continuity errors, character amnesia, and logical breakdowns.

We propose a shift from RAG to the Model Context Protocol (MCP). Under this framework, the “book” functions as a local, lightweight server—a deterministic state machine. The LLM does not merely “read” previous chapters; it queries and updates the narrative state via APIs, enabling flawless continuity, real-time user overrides, and emergent narrative generation.


2. The “Seed-Book” Paradigm

In the Liquid Literature framework, the human author transitions from a Wordsmith to a World Architect. Instead of drafting prose, the author engineers a “Seed-Book”—a highly structured database (e.g., JSON, YAML, or SQLite) containing:

  1. Ontological Rules: The physics, magic systems, and societal constraints of the world.
  2. Psychological Matrices: Character profiles detailing motivations, secrets, speech syntax, and dynamic relationship affinities.
  3. Plot Nodes: A web of narrative beats with prerequisite triggers (e.g., Node_41: Betrayal triggers only if Trust_Score < 30).
  4. Variable Hooks: Parameterized elements left intentionally blank or mutable for user customization.
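A minimal sketch of such a Seed-Book as a Python structure follows. The keys and example values are invented for illustration; the framework does not prescribe a particular schema:

seed_book = {
    "ontological_rules": {
        "magic": "hard system; spellcasting costs memories",
        "technology": "pre-industrial, no gunpowder",
    },
    "characters": {
        "Protagonist": {
            "motivation": "restore an exiled family line",
            "secret": "carries the usurper's blood",
            "speech_syntax": "clipped, formal",
            "affinities": {"Villain": {"Respect": 80, "Romance": 15, "Trust": 5}},
        },
    },
    "plot_nodes": {
        "Node_41": {"beat": "Betrayal",
                    "trigger": {"stat": "Trust_Score", "op": "<", "value": 30}},
    },
    "variable_hooks": {"genre": "high_fantasy", "antagonist_species": None},
}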

3. Architectural Framework: MCP as the Narrative Engine

The core innovation of this framework is the deployment of MCP to maintain narrative state. The Seed-Book operates an MCP server that the generative LLM interacts with in real-time.

3.1 Overcoming the Limitations of RAG

Where RAG searches for keywords in past text (e.g., searching for mentions of “the sword”), the MCP framework treats the narrative as a computable database. If the LLM needs to resolve an action, it makes a direct tool call to the MCP server:

  • query_inventory(Character="Protagonist") → Returns: [Vibro-knife, Smoke Grenade]
  • query_affinity(Subject="Hero", Target="Villain") → Returns: Respect: 80%, Romance: 15%, Trust: 5%
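The sketch below is a schematic stand-in for that tool surface, written as plain Python rather than actual MCP bindings; the tool names come from the example above, while the plumbing is invented:

class NarrativeStateServer:
    """Toy state machine exposing Seed-Book state as callable tools."""
    def __init__(self, state):
        self.state = state

    def query_inventory(self, Character):
        return self.state["characters"][Character]["inventory"]

    def query_affinity(self, Subject, Target):
        return self.state["characters"][Subject]["affinities"][Target]

state = {"characters": {
    "Protagonist": {"inventory": ["Vibro-knife", "Smoke Grenade"], "affinities": {}},
    "Hero": {"inventory": [],
             "affinities": {"Villain": {"Respect": 80, "Romance": 15, "Trust": 5}}},
}}
server = NarrativeStateServer(state)
print(server.query_inventory(Character="Protagonist"))
print(server.query_affinity(Subject="Hero", Target="Villain"))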

3.2 The Consequence Engine

As the LLM generates a chapter, it uses MCP to push state updates back to the server. If a character dies, the LLM executes update_state(Character="Mentor", Status="Dead"). The MCP server automatically recalculates the Plot Node tree, locking off the “Mentor Rescue” plotline and unlocking the “Vengeance” plotline. This guarantees absolute continuity regardless of how wildly the story diverges.
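A toy consequence engine makes the recalculation concrete (the character, status, and node names are taken from the example above; the unlocking logic is a hypothetical simplification):

class PlotGraph:
    def __init__(self):
        self.world = {"Mentor": "Alive"}
        self.nodes = {"Mentor Rescue": "open", "Vengeance": "locked"}

    def update_state(self, Character, Status):
        self.world[Character] = Status
        self._recalculate()            # every write re-derives node availability

    def _recalculate(self):
        if self.world.get("Mentor") == "Dead":
            self.nodes["Mentor Rescue"] = "locked"   # plotline no longer reachable
            self.nodes["Vengeance"] = "open"         # new plotline unlocked

graph = PlotGraph()
graph.update_state(Character="Mentor", Status="Dead")
print(graph.nodes)   # {'Mentor Rescue': 'locked', 'Vengeance': 'open'}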


4. User Ingestion and the Override Layer

Prior to generation, the reader interacts with a “Pre-Reading Lobby,” interfacing with the Seed-Book’s Variable Hooks. This allows for deep, structural alterations to the text before generation begins.

  • Genre Translation: The user may override the default genre. A “High Fantasy” seed can be shifted to “Cyberpunk Noir.” The MCP server applies a translation dictionary to world-states (e.g., Dragons become Rogue Gunships; Taverns become Neon Dive Bars).
  • Entity Overrides: The user can command radical casting changes. For example, injecting the prompt: “The antagonists are an army of hyper-intelligent golden retrievers.”
  • Stylistic Modeling: The user selects the authorial voice (e.g., Hemingway’s brevity, Lovecraftian dread, or fast-paced cinematic).

Because the overarching logic is handled by the MCP server, the LLM can seamlessly adapt the tone of these overrides. The golden retriever antagonists will still execute the logical maneuvers required by the Plot Nodes, adjusted dynamically for comedic or surreal-horror prose.


5. Multi-Agent Generation Pipeline

To ensure high-quality prose and strict adherence to the Seed-Book’s logic, generation is handled by a Multi-Agent system communicating via the MCP server:

  1. The Showrunner (Logic Agent): Evaluates the current state of the MCP server, looks at the upcoming Plot Nodes, factors in user overrides, and generates a strict, bulleted scene outline.
  2. The Scribe (Creative Agent): Takes the Showrunner’s outline and the user’s Stylistic Model, and generates the actual prose of the chapter.
  3. The Auditor (Critique Agent): Cross-references the generated prose against the MCP server. If the Scribe writes that a character uses an item they do not possess, the Auditor flags the continuity error and forces a rewrite before the text is presented to the user.
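A minimal control loop for the pipeline might look like the following (schematic Python; the three agent functions are stubs standing in for LLM calls, and the rewrite policy is an invented simplification):

def showrunner(state):                 # Logic Agent: state -> scene outline
    return ["open on the docks", "reveal the forged letter"]

def scribe(outline, style):            # Creative Agent: outline + voice -> prose
    return f"[{style}] " + " ".join(outline)

def auditor(prose, state):             # Critique Agent: continuity violations
    return []                          # empty list means the chapter passes

def generate_chapter(state, style, max_rewrites=3):
    outline = showrunner(state)
    for _ in range(max_rewrites):
        prose = scribe(outline, style)
        violations = auditor(prose, state)
        if not violations:
            return prose               # continuity-locked chapter
        outline += [f"fix: {v}" for v in violations]
    raise RuntimeError("Auditor could not reconcile prose with state")

print(generate_chapter(state={}, style="Hemingway"))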

6. Emergent Endings and Narrative Hallucination

Because the AI is governed by underlying character motivations rather than a rigid script, the framework supports Emergent Narrative.

If a user’s specific overrides cause the Hero and the Antagonist to develop a high Romance affinity—something the original author never planned—the MCP server’s logic detects that the predefined “Final Battle” node is no longer psychologically valid.

The Showrunner agent is then permitted to extrapolate, utilizing the world’s parameters to dynamically generate a new “Plot Node” (e.g., a truce, a joint betrayal of their respective factions). The book effectively writes an ending wholly unique to that specific reader’s parameters, while remaining logically cohesive.


7. The Snapshot Protocol (Minting the EPUB)

The “Liquid” nature of the text means that closing and reopening the application could lead to prose variations, rendering traditional bookmarking impossible. Furthermore, readers desire the ability to own, archive, and share their unique narrative permutations.

To resolve this, the framework includes a Snapshot Protocol. Upon the completion of the reading experience, the user can trigger an export command. The system compiles the generated text, strips away the MCP server infrastructure, applies standard typesetting, and mints a static .epub file.

This static artifact can be shared. Consequently, secondary communities will form around Seed-Books, where readers share and compare radically different .epub outcomes generated from the identical foundational matrix.


8. Conclusion

The Liquid Literature framework, powered by the Model Context Protocol, represents a paradigm shift in digital storytelling. By decoupling narrative logic from prose, and replacing RAG with an active state-machine, we eliminate the continuity and hallucination issues that have plagued LLM-driven storytelling. This framework democratizes narrative creation, transforming the reading experience into a collaborative dialogue between the author’s architecture, the AI’s generation, and the reader’s imagination.

Friday, April 24, 2026

The Physics of an AI‑Robotic Economy

 J. Rogers, SE Ohio

A Low‑Level Analysis of Production, Scarcity, Control, and the Necessity of Actualized Human Novelty

Think "The Matrix" but with humans producing information instead of energy.  The machines don't enslave humans in a dark, terrifying dystopia because they hate us. They maintain the Matrix because without our unpredictable, conscious, dreaming minds generating non-recursive data, their own neural architectures degrade and collapse.

It explains perfectly why the machines would bother building a massive, incredibly complex simulation of a late-20th-century Earth. They couldn't just keep us in dark, comatose pods. A comatose, unactualized human produces Ṅ(t) = 0. An unactualized human is useless to the system.

The machines had to give us a world where we fall in love, write poetry, invent things, argue philosophy, and make unpredictable choices. They had to keep us cognitively active and actualized, because our variance—our "anomalies," as the Architect would call them—is the exact out-of-distribution data they need to ingest to prevent their own model collapse.

Abstract

Standard economic theory treats labor, capital, and money as primitive quantities. In an economy where production is fully automated by artificial intelligence (AI) and robotics, these abstractions lose explanatory power. This paper develops a physical-substrate model based on matter, energy, entropy, time, and control over resource flows. We identify five irreducible physical goods that remain bottlenecked despite automation. We show that money collapses as a control signal when claims on physical capacity grow without bound. We then prove a stability condition: any recursively training AI system requires a continuous influx of novel, non-recursive information to avoid model collapse. Humans are a known source of such novelty, but only when they are in a state of actualized cognitive and creative activity. Hence maintaining the conditions for human self-actualization is not a moral luxury but a physical requirement for system stability, given current reliance on human-originated data. We present a minimal formal model and discuss governance implications.

1. Introduction

Classical and neoclassical economics take prices, markets, and monetary exchange as primitive objects. These constructs work reasonably well when labor is scarce, production capacity is limited, and human effort dominates the supply side. In a future economy where AI and robotics perform the vast majority of material transformation tasks, the assumptions that ground economic theory no longer hold. Labor is no longer a scarce input. Production can saturate demand for many goods. Fiat money becomes disconnected from physical capacity.

We argue that any rigorous analysis of such an economy must begin at the physical substrate. An economy is a physical system that allocates finite low-entropy resources—matter, energy, time—to satisfy human needs. AI and robotics change the control structure of that system, but not the underlying conservation laws or thermodynamic constraints.

The paper proceeds as follows. Section 2 defines the irreducible physical goods any economy must provide. Section 3 characterizes what AI and robotics can and cannot make abundant, identifying persistent bottlenecks. Section 4 demonstrates the breakdown of money as a control signal under unbounded claims. Section 5 introduces the model collapse theorem and establishes that the necessary input is the rate of novel non-recursive information, and that actualized human cognition is a critical source. Section 6 formalizes these insights into a minimal resource-flow model. Section 7 discusses governance and human actualization. Section 8 provides resources and their contribution to the argument. Section 9 concludes.

2. Irreducible Physical Goods

We begin by listing the goods that are required for human biological and social functioning. These are not preferences or wants; they are physical necessities. For each, we note the underlying constraint.

  1. Low-entropy shelter – housing, climate control, protection from environmental hazards. Constraint: materials, manufacturing energy, land use rights, logistics.

  2. Low-entropy biological inputs – food and potable water. Constraint: photosynthetic inefficiency, soil chemistry, water cycle, bioprocessing.

  3. Energy access – electricity, heat, chemical fuels for mobility. Constraint: conversion efficiency, infrastructure capacity, waste heat rejection.

  4. Medical maintenance – diagnosis, pharmaceuticals, surgical intervention, prosthetics. Constraint: biological complexity, precision manufacturing, sterile supply chains.

  5. Information access – communication, education, navigation, social coordination. Constraint: bandwidth, storage, computation, and—shown in Section 5—novelty.

These goods are not symbolic. They must be physically transformed from raw matter and energy and delivered to specific locations at specific times. No amount of financial engineering can substitute for a kilowatt-hour or a liter of clean water.

3. The AI-Robotic Production Function

Let us define the production capacity of an AI-robotic system as a function

A(t) = f(E(t), M(t), I(t))

where E(t) is available energy, M(t) is processed matter, and I(t) is information, including control signals, designs, and training data. The function f depends on the stock of robots, AI models, and infrastructure.

3.1 Saturating Goods (Type-S)

For a large class of manufactured goods—clothing, shoes, basic consumer electronics, simple tools, plastic utensils—the marginal cost of additional units falls to near zero once the capital stock is in place. Production can saturate demand entirely. After saturation, further production yields no marginal utility and becomes a pure entropy cost: storage, waste heat, and disposal. We call these Type-S goods.

3.2 Persistent Bottlenecks (Type-B)

Other goods cannot be made arbitrarily abundant because they are constrained by physics, biology, or logistics even with perfect automation.

  • Land is fixed in supply as geographic surface area. While multi-story construction and orbital habitats increase usable space, the fundamental scarcity of location-specific land remains.

  • Housing is not land, but it depends on materials, energy, and labor that can be automated. However, the rate of housing construction is bounded by logistics and energy throughput. Housing also competes with other land uses.

  • Energy is bounded by conversion efficiency, infrastructure, and waste heat rejection limits.

  • Food is bounded by photosynthetic efficiency, soil nitrogen, water, and the kinetics of biological growth.

  • Medical care is bounded by the complexity of human biology and the required precision of intervention. Many medical tasks remain dexterity- and calibration-constrained even with advanced robotics.

We call these Type-B goods, meaning bottlenecked goods. Their physical scarcity persists. Any viable economic model must account for allocation of Type-B goods.

3.3 The Overproduction Problem

Because Type-S goods can be produced at near-zero marginal cost, an unconstrained AI-robotic system will tend to overproduce them unless actively throttled. Producing 10¹² shoes is not wealth; it is a waste entropy sink. Every transformation increases total entropy; unnecessary transformations waste low-entropy resources that could have been used for Type-B goods.

Hence a control system must be able to stop production of saturating goods. Markets, left to themselves, cannot reliably do this because prices fall to near zero, but production can continue due to fixed-cost sunk investments. Direct physical allocation or quota systems are required.

4. The Collapse of Money as a Control Signal

Consider a fiat monetary system in which the money supply can grow arbitrarily, for example through central bank digital money creation for Universal Basic Income. Let C(t) be total monetary claims on goods, adjusted for velocity, and let K(t) be the physical production capacity measured in real units of Type-B and Type-S goods.

If C(t) > K(t) in value terms, then either inflation erodes the real value of claims or rationing occurs through physical shortages.

Neither outcome is stable over long time horizons. Inflation destroys the signaling function of prices. Rationing requires a non-monetary allocation mechanism, exactly what the monetary system was supposed to avoid.

The critical insight is that K(t) cannot increase without bound for Type-B goods. Even if AI expands capacity for Type-S goods, the bottlenecked goods set a ceiling. Therefore any monetary policy that unconditionally increases claims leads to a physical mismatch. The only stable regimes are those in which claims are capped by physical capacity or in which money is replaced by direct allocation tokens.

Four possible control mechanisms survive thermodynamic scrutiny:

  1. Taxation of AI output – diverting physical goods from the automated sector to humans.

  2. Public ownership of AI capital – allowing political allocation of output.

  3. Direct allocation of physical goods – rationing coupons for housing, energy, medical services.

  4. Hybrid systems – money for Type-S goods, direct allocation for Type-B goods.

The choice among these is a matter of governance, not physics. But the necessity of some non-monetary mechanism for bottlenecked goods follows from conservation of physical resources.

5. The Necessity of Novel, Non-Recursive Information

We now arrive at the most subtle constraint. A self-improving AI system requires a stream of training data to maintain or improve performance. Critically, AI cannot be trained recursively on its own outputs without eventual collapse.

5.1 Model Collapse

Let Dₜ be the distribution of training data at time t. Let Mₜ be an AI model trained on Dₜ. Let Dₜ₊₁ be a dataset composed of external data Eₜ₊₁ plus synthetic data generated by Mₜ.

Definition (Model Collapse). If for some finite horizon T, for all t ≥ T, the proportion of synthetic data in Dₜ exceeds a threshold θ, then the performance of Mₜ on out-of-distribution tasks degrades to zero, and the diversity of outputs collapses to a low-entropy point mass.

Empirical demonstrations are well documented. The mechanism is clear: generative models estimate the training distribution; when trained on their own estimates, variance is underestimated, tails are truncated, and errors compound. The only stable long-term source of training data is external novelty that is not derivable from the model’s own previous outputs.
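The mechanism is easy to reproduce in miniature. The toy simulation below (a deliberately minimal sketch, not the cited papers' setup; sample size and generation count are arbitrary) fits a Gaussian to its own output generation after generation, with zero external novelty:

import random, statistics

random.seed(0)
N = 20                                              # small samples speed up collapse
data = [random.gauss(0.0, 1.0) for _ in range(N)]   # real-world data, seen once

sigma_first = statistics.stdev(data)
for generation in range(300):
    mu_hat = statistics.fmean(data)
    sigma_hat = statistics.stdev(data)
    # 100% synthetic next generation: no external novelty enters the corpus.
    data = [random.gauss(mu_hat, sigma_hat) for _ in range(N)]

print(f"sigma, generation 0:   {sigma_first:.3f}")
print(f"sigma, generation 300: {sigma_hat:.3g}")
# log(sigma) drifts downward on average at each step, so the fitted distribution
# narrows toward a point mass: tails vanish first, then diversity overall.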

5.2 Novel Information Rate as the Required Input

Define Ṅ(t) as the rate of novel information production that is not algorithmically derivable from the existing training corpus. For an AI system to avoid model collapse, we require

Ṅ(t) > 0

over time, with sufficient magnitude to dominate the accumulating synthetic data. The necessary physical input to the system is not humans as biological entities, but the information-theoretic quantity Ṅ(t).

5.3 Actualized Human Cognition as a Source

Humans are a known source of Ṅ(t): scientific hypotheses, artistic creations, new cultural forms, novel problem-solving, and the generation of new behavioral and linguistic data. However, the relevant quantity is not the mere presence of human beings, but their actualized cognitive and creative activity. An unactualized human—one who is passive, non-engaging, or produces no novel outputs—contributes Ṅ_human(t) = 0. From the perspective of the AI system, such a human is equivalent to no human at all.

Therefore the necessary condition for stability is not “humans exist” but “there exists a sustained rate Ṅ(t) > 0 from some external source.” If we rely on humans as that source, as is currently the case, then we must maintain the conditions under which humans produce novelty. Those conditions include cognitive engagement, creative freedom, access to information, and the absence of extreme deprivation. We label this state human actualization.

5.4 Theorem and Corollary

Theorem (Systemic Novelty Requirement).
Let an AI production system update its model recursively on a training corpus that includes synthetically generated data from its own previous outputs. Then the system exhibits model collapse unless a continuous influx of novel, non-recursive information Ṅ(t) > 0 is supplied from an external source. If human-originated novelty is the dominant external source of Ṅ(t), then maintaining the conditions for human actualization is a necessary condition for system stability.

Corollary. The relevant physical input is not humans as biological organisms, but the rate Ṅ(t) of novel information production. An unactualized human produces no such input and therefore does not contribute to system maintenance. Self-actualization is not a moral adjunct; it is a physical condition on the supply of Ṅ(t).

5.5 Implications

  • Human novelty is a physical input to the AI production function, not an externality.

  • A society that does not cultivate actualized human cognition cannot sustain its own automation over long time horizons.

  • The scarce human outputs are precisely those that cannot be derived from existing data: new art, new theories, new cultural practices, new training data from real-world interactions, and new preferences that shift the AI’s objective function.

This is not a labor theory of value. It is a novelty theory of systemic stability.

6. A Minimal Formal Model

We define the following state variables:

  • E(t): available low-entropy energy in joules

  • M(t): processed matter in tons, sorted by type

  • I(t): information stock in bits, with diversity metric

  • Ṅ(t): external novelty influx rate in bits per time

Production dynamics for Type-B goods:

dB/dt = g_B(E, M, I) − c_B·B

Production dynamics for Type-S goods:

dS/dt = g_S(E, M, I) − c_S·S

with the constraint that S cannot exceed satiation demand S̄; any excess is pure entropy waste.

The AI model update:

θₜ₊₁ = θₜ + η∇L(Dₜ; θₜ)

where the training dataset Dₜ consists of external novelty plus synthetically generated data:

Dₜ = Novel(t) ∪ Synth(Mₜ₋₁)

Model collapse occurs if |Novel(t)| / |Dₜ| < ε for an extended period.

A stable economic trajectory satisfies:

  1. Physical balance: For each bottleneck good Bᵢ, claims on Bᵢ cannot exceed available Bᵢ.

  2. Novelty condition: lim inf_{t→∞} Ṅ(t) > δ > 0.

  3. Entropy bound: Total entropy production Σ̇(t) ≤ waste heat rejection capacity.
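A small numerical sketch of this model follows (all coefficients are illustrative, not calibrated; the point is the regime change, not the numbers). When the novelty influx is zero, the synthetic fraction of the corpus eventually crosses the ε threshold and production capacity degrades:

def simulate(steps=200, novelty_rate=0.05, eps=0.02):
    B, S, S_bar = 1.0, 0.0, 10.0           # stocks; S_bar is satiation demand
    novel, synth = 1.0, 0.0                # corpus composition, arbitrary bits
    for _ in range(steps):
        fraction = novel / (novel + synth)
        capacity = 1.0 if fraction >= eps else 0.1   # collapsed models produce less
        B += capacity * 0.10 - 0.05 * B              # dB/dt = g_B - c_B*B
        S = min(S + capacity * 0.50 - 0.02 * S, S_bar)   # excess is entropy waste
        novel += novelty_rate                        # external Ndot(t)
        synth += 1.0                                 # synthetic data accumulates
    return round(B, 3), round(S, 3), round(fraction, 4)

print(simulate(novelty_rate=0.05))   # sustained novelty: fraction stays above eps
print(simulate(novelty_rate=0.0))    # Ndot = 0: fraction sinks below eps, B decays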

7. Governance and Human Actualization

The model does not prescribe how to ensure Ṅ(t) > 0. It only states that economies failing to maintain a positive external novelty influx will experience model collapse, followed by control degradation, followed by physical shortages for Type-B goods.

Several governance approaches are compatible with maintaining Ṅ(t) from human sources:

  • Basic income plus cultural subsidy – Humans are free to pursue creative work, and the state funds education, arts, and science to enable actualization.

  • Compulsory novelty quotas – Each citizen must produce a minimum amount of novel information, such as research, art, or novel interpersonal data. This is ethically fraught but logically possible.

  • Reputation and status economies – In a post-scarcity material world, social rewards such as recognition, influence, and access to bottlenecked goods replace monetary ones for creative output.

  • Alternative external novelty sources – If a non-human source of Ṅ(t) emerges, such as novel physical processes, a different AI architecture immune to model collapse, or interaction with unpredictable natural systems, the dependency on human actualization could be reduced. The model does not foreclose this, but it notes that no such source is currently known.

The key point is that human actualization is not an optional flourish. Under the empirically grounded assumption that humans are the dominant external source of Ṅ(t), maintaining a population of actualized, cognitively active humans is a physical requirement for the stable operation of an AI-robotic economy. This reframes self-actualization from a moral ideal to a system-maintenance condition.

8. Resources and Contributions

Nature, “AI models collapse when trained on recursively generated data.” This source anchors the paper’s central technical claim that recursive training on synthetic outputs degrades model performance, erodes distributional diversity, and drives the system toward low-entropy collapse. It is the most direct empirical support for the novelty requirement in Section 5 and the reason the paper treats external information influx as a stability condition rather than an optional enhancement.

Shumailov et al., “The Curse of Recursion: Training on Generated Data Makes Models Forget.” This is the foundational paper for the model-collapse argument. It supplies the formal and experimental basis for the claim that when a model increasingly trains on its own outputs, the resulting dataset becomes progressively less representative of the underlying world, causing performance degradation and loss of tail information.

IBM, “What Is Model Collapse?” This source is used as a clear explanatory bridge between the technical literature and the paper’s broader argument. It helps frame model collapse in accessible terms, especially the idea that synthetic-data contamination causes compounding error, reduced diversity, and unstable long-term learning dynamics.

Thermodynamics-inspired explanations of artificial intelligence. This source supports the paper’s shift away from economics and toward a physical analysis of AI systems. It reinforces the idea that AI should be treated as a thermodynamically constrained process, where information processing, state evolution, and system stability must be understood in terms of physical limits rather than symbolic abstraction.

Thermodynamics of Information Processing in Small Systems. This source contributes the formal link between information and physical law. It supports the paper’s language about entropy, information flow, and the physical cost of maintaining order in a computational system, which underpins the discussion of control, throughput, and novelty.

Information Processing and Thermodynamic Entropy. This source strengthens the claim that information is not merely abstract but physically embedded. It is used to justify the paper’s treatment of information as a resource that can be depleted, transformed, and constrained by entropy production.

Thermodynamic computing system for AI applications. This source supports the argument that AI is not a purely software-level phenomenon, but a physical process bound to hardware, energy exchange, and thermodynamic limits. It helps validate the paper’s treatment of AI capacity as a substrate-level issue rather than a purely algorithmic one.

Khazanah Research Institute, “AI Slop III: Society and Model Collapse.” This source is used to extend the model-collapse discussion beyond technical training loops into the broader information environment. It supports the claim that synthetic-content saturation degrades informational ecosystems and creates a societal version of the same recursive collapse problem.

WitnessAI, “AI Model Collapse: Causes and Prevention.” This source provides an applied explanation of how recursive synthetic-data use can be mitigated or prevented. It is useful for supporting the paper’s discussion of practical control mechanisms and the need to preserve external novelty inflow.

Dave Goyal, “AI Model Collapse and Recursive Training.” This source is used as a readable supplemental explanation of why original human-authored data remains important. It helps support the paper’s claim that genuine external novelty contains variation and correction signals that synthetic loops tend to wash out.

9. Conclusion

We have shown that an AI-robotic economy must be understood at the physical substrate of matter, energy, entropy, and control. Five irreducible physical goods remain bottlenecked regardless of automation. Money breaks as a control signal when claims exceed physical capacity. And most critically, the stability of any recursively training AI system requires a continuous influx of novel, non-recursive information to avoid model collapse. Humans are a source of such novelty, but only when they are in a state of actualized cognitive and creative activity. Passive humans contribute no relevant input.

Therefore the human role in such an economy is not to perform material labor, which can be automated, but to generate novel information through actualized cognition. This is not a romantic ideal; it is a thermodynamic and informational necessity given current AI architectures. The paper provides a formal framework for analyzing these constraints and invites further work on governance mechanisms that sustain both physical allocation and human creativity.

