Mastodon Politics, Power, and Science

Sunday, May 3, 2026

Which Second Does G Introduce?

J. Rogers, SE Ohio

On the Self-Referential Temporal Ambiguity of the Gravitational Constant

A Foundational Critique of Dimensional Analysis in Gravitational Physics

Abstract

Newton's gravitational constant G carries units of m³ kg⁻¹ s⁻². The s⁻² term introduces a specific time scale into the law of gravitation. However, general relativity establishes that gravity is not a force acting across a fixed time — it is a gradient of time rates. Every point in a gravitational field has its own proper time, running at a rate that depends on the local gravitational potential. This paper poses a question that has not been formally addressed in the literature: which second does G introduce? We demonstrate that this question has no well-defined answer, that G's temporal dimension is therefore physically ambiguous, and that this ambiguity is the root cause of G's notorious measurement inconsistency across experiments spanning 200 years. We further show that G/c² — which appears in the dimensionless gravitational parameter τ = (G/c²)(m/r) — is free of this ambiguity, is known to GPS precision (10 significant figures), and is the only combination of G that the universe actually uses.

1. The Unit Contamination Problem

Newton's law of gravitation is standardly written as:

F = G · mM / r²

where F is force in kg·m·s⁻², m and M are masses in kg, r is distance in meters, and G = 6.674 × 10⁻¹¹ m³ kg⁻¹ s⁻². The dimensional structure reveals an immediate problem: the quantities mM/r² have units of kg²/m². They contain no time. Gravity, as a geometric relationship between masses and distances, introduces no clock.

G injects s⁻² into this equation for one reason only: Newton's second law F = ma defines force to include acceleration, which is measured against a clock. The second was already in F via kinematics. G absorbs s⁻² as a compensating factor to preserve dimensional consistency across a unit system that was never designed for gravitational physics.
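The unit mismatch can be checked mechanically. The sketch below (an illustrative helper, not from the text) tracks dimensions as exponent tuples (length, mass, time) and confirms that mM/r² carries no time dimension, so G must supply the s⁻²:

```python
# Minimal dimensional bookkeeping: a dimension is an exponent tuple
# (length, mass, time); multiplying quantities adds exponents,
# dividing subtracts them.

def mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    return tuple(x - y for x, y in zip(a, b))

KG = (0, 1, 0)          # kilogram
M = (1, 0, 0)           # meter
FORCE = (1, 1, -2)      # kg·m·s⁻², the SI newton

# mM/r² carries kg²/m² — the time exponent is zero
geometric = div(mul(KG, KG), mul(M, M))
assert geometric == (-2, 2, 0)

# G must supply whatever makes G·(mM/r²) match FORCE
G_dims = div(FORCE, geometric)
assert G_dims == (3, -1, -2)    # m³ kg⁻¹ s⁻², including the s⁻²
```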

The alternative is immediate. Define force geometrically:

F ≡ mM / r² [units: kg²/m²]

Then gravity is exact, unit-free in the physical sense, and contains no clock. The constant migrates:

F = ma / G ⇒ G = ma / F

G becomes the constant of inertia — the conversion factor between geometric force and kinematic response. The temporal ambiguity now lives in kinematics, where it belongs, not in the description of gravitational geometry.

2. Gravity Is a Time Gradient

General relativity does not describe gravity as a force. It describes gravity as spacetime curvature, and in the weak-field limit, this curvature is predominantly temporal. The gravitational redshift formula is:

Δf/f = ΔΦ / c² = (G/c²) · (M/r)

A clock deeper in a gravitational well runs slower. The rate difference between two clocks at different gravitational potentials is continuous, position-dependent, and exact. This is not a perturbative correction to flat-space physics — it is the physics. Gravity IS the gradient of proper time rates across space.

GPS confirms this operationally. Satellite clocks must be corrected for gravitational time dilation to maintain nanosecond synchronization. These corrections are computed using τ = (G/c²)(M/r) and are accurate to 10 significant figures. GPS does not fail at the 5th significant figure despite G being known only to 5 significant figures.
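The magnitude of this correction is easy to reproduce. The sketch below uses the standard geocentric gravitational parameter GM⊕ — which, notably, is known far more precisely than G or M separately — and approximate radii; the values are illustrative:

```python
# Sketch of the gravitational clock-rate correction described above,
# using the weak-field formula Δf/f = (GM/c²)(1/r_ground − 1/r_orbit).
# GM_EARTH is the standard geocentric gravitational parameter.

C = 299_792_458.0            # m/s, exact by definition
GM_EARTH = 3.986004418e14    # m³/s², standard value
R_GROUND = 6.371e6           # m, mean Earth radius (approximate)
R_ORBIT = 2.6571e7           # m, GPS orbital radius (approximate)

delta_f_over_f = (GM_EARTH / C**2) * (1.0 / R_GROUND - 1.0 / R_ORBIT)
# ≈ 5.3e-10: satellite clocks run fast by roughly 45.7 microseconds per
# day before the special-relativistic velocity term is subtracted.
print(delta_f_over_f)
```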

This is the central empirical fact that demands explanation.

3. The Self-Referential Temporal Ambiguity

We now state the core problem precisely.

G has units m³ kg⁻¹ s⁻². The s⁻² encodes a specific time rate — a second. Every Cavendish-style measurement of G uses a clock to measure acceleration, force, or oscillation period. That clock runs at a rate determined by the local gravitational potential.

But G is supposed to describe the gravitational potential itself.

The second embedded in G is therefore evaluated inside the very field that G is meant to characterize. This is not a small systematic error. It is a logical circularity:

• To measure G you need a clock.

• Your clock rate depends on the local gravitational potential Φ.

• Φ depends on G.

• Therefore G measured anywhere depends on G at that location.

G is not a universal constant. It is a local quantity, contaminated by the gravitational potential of the measurement site, that has been treated as universal because the contamination is small enough at Earth's surface to hide within experimental uncertainty — until experiments became precise enough to disagree.

The 40-sigma disagreement between precision G measurements — experiments disagreeing by 40 times their stated error bars — is not experimental incompetence. It is the universe signaling that the quantity being measured is not well-defined.

4. Which Second? The Gradient Problem

Consider a gravitational field with potential Φ(r). The proper time rate at position r relative to a clock at infinity is:

dτ/dt = √(1 + 2Φ(r)/c²) ≈ 1 + Φ(r)/c² [weak field]

In a gradient, every point r has a distinct proper time rate. There is no canonical 'the second' in a gravitational field. The second at r₁ and the second at r₂ differ by:

Δ(dτ/dt) = Φ(r₁)/c² - Φ(r₂)/c² = (G/c²)(M/r₁ - M/r₂)
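The practical size of this effect for two terrestrial laboratories can be sketched as follows. The 1500 m altitude difference is an illustrative assumption, and the factor of 2 reflects the 1/T² dependence of a pendulum-period G extraction:

```python
# Clock-rate offset between two labs at different altitudes, using the
# weak-field approximation ΔΦ/c² ≈ g·h/c². A G extraction that scales
# as 1/T² picks up roughly twice this fractional offset.

C = 299_792_458.0
G_SURFACE = 9.81            # m/s², local gravitational acceleration

def rate_offset(height_m):
    """Weak-field clock-rate difference relative to sea level."""
    return G_SURFACE * height_m / C**2

eps = rate_offset(1500.0)   # lab 1500 m above a sea-level lab
print(eps)                  # ≈ 1.6e-13
print(2 * eps)              # implied fractional shift in an extracted G
```

The point of the sketch is only that the two labs embed numerically different seconds; it makes no claim about the size of this shift relative to the reported experimental discrepancies.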

When a Cavendish experiment uses a torsion fiber with period T to extract G, T is measured in coordinate seconds at the lab's gravitational potential. When an atom interferometry experiment uses laser pulse timing to measure acceleration, those pulse intervals are proper time intervals at the apparatus's location. The two experiments embed different seconds into their extracted values of G, and neither second has been corrected to a common reference.

The question 'which second does G introduce?' therefore has the answer: whichever second existed at the location and gravitational potential of the measurement, uncorrected for the field being measured. This is not a universal second. It is a local, potential-dependent, self-referentially contaminated second.

5. G/c² Is the Clean Quantity

The combination G/c² is free of this ambiguity. To see why, note that c is also measured locally using local clocks. The local second that contaminates G also contaminates c² in the same measurement context. When you form G/c², the local temporal factor cancels:

G/c² = [m³ kg⁻¹ s⁻²] / [m² s⁻²] = m / kg

The seconds are gone. G/c² has units of meters per kilogram — a purely geometric ratio. It is the Schwarzschild radius per unit mass, the conversion factor between mass and the spatial curvature it produces.
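A quick numerical check of this ratio (constants are approximate CODATA-style values):

```python
# G/c² carries units of m/kg: the Schwarzschild radius per unit mass.

G = 6.674e-11               # m³ kg⁻¹ s⁻²
C = 299_792_458.0           # m/s, exact by definition

g_over_c2 = G / C**2
print(g_over_c2)            # ≈ 7.43e-28 m/kg

# Schwarzschild radius of a mass M is r_s = 2·(G/c²)·M
M_SUN = 1.989e30            # kg, approximate solar mass
print(2 * g_over_c2 * M_SUN)   # ≈ 2.95e3 m, the Sun's Schwarzschild radius
```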

The dimensionless gravitational parameter is then:

τ = (G/c²) · (m / r) = (l_P / m_P) · (m_SI / r_SI)

where l_P = √(hG/c³) and m_P = √(hc/G) are the Planck length and mass. Crucially, l_P/m_P = G/c² exactly. The Planck quantities are not fundamental here — they are a convenient factorization that makes explicit what is happening: the SI unit standards for length and mass (l_P, m_P) are introduced and then immediately cancelled by the actual physical ratio m_SI/r_SI. What remains is pure dimensionless physics.
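The identity l_P/m_P = G/c² can be verified directly; h cancels in the ratio, so the check holds to floating-point rounding regardless of whether h or ħ is used:

```python
# Verify l_P / m_P = G / c² for the non-reduced Planck units quoted
# in the text. Algebraically: √(hG/c³) / √(hc/G) = √(G²/c⁴) = G/c².

import math

H = 6.62607015e-34          # J·s, exact by definition
C = 299_792_458.0           # m/s, exact by definition
G = 6.674e-11               # m³ kg⁻¹ s⁻²

l_P = math.sqrt(H * G / C**3)   # Planck length (non-reduced)
m_P = math.sqrt(H * C / G)      # Planck mass (non-reduced)

assert math.isclose(l_P / m_P, G / C**2, rel_tol=1e-12)
```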

τ is the same number regardless of where in a gravitational gradient you compute it, because G/c² carries no net temporal dependence. This is why GPS works to 10 significant figures using τ while G itself is uncertain at the 5th figure.

6. The Measurement Implication

If G/c² is the physically clean quantity, the experimental program should measure G/c² directly rather than G alone. Several consequences follow:

• Atom interferometry experiments that measure gravitational acceleration a = GM/r² and laser interferometry experiments that measure r with electromagnetic precision are already measuring G/c² implicitly. The c² enters through the electromagnetic calibration of the length standard.

• Experiments that attempt to measure G in isolation — by measuring a gravitational force against a mass standard defined by the kilogram — are attempting to separate G from c² in a context where the universe has no opinion about that separation.

• The disagreement between G measurements performed by different methods may reflect genuine physical differences in the local gravitational potential of each laboratory, uncorrected for the temporal self-reference described in Section 3.

A 2026 NIST proposal to measure G via laser spectroscopy of the axion Compton frequency — connecting G to h, e, and nucleon masses — is the correct structural approach. It measures G through electromagnetic invariants, which share the same local temporal frame as c, and therefore directly accesses G/c² in the physically meaningful sense.

7. The Hume Boundary in Physics

The deeper issue is epistemological. A measurement is not a property that an object possesses. It is a ratio between an object and an arbitrary unit standard. The table does not have a length; it has a ratio to the meter. The meter is a convention, adopted in Paris in 1793, with no physical necessity.

Newton's equation F = GmM/r² embeds three independent arbitrary conventions — the meter, the kilogram, and the second — inside G. These conventions were chosen for unrelated practical reasons by humans at a specific historical moment. There is no physical reason they should combine into a clean gravitational constant. They do not.

This is Hume's is-ought distinction applied to metrology. From descriptive physical facts you cannot derive normative unit definitions. No chain of measurements proves that a mile has 5280 feet. That is a social fact, true inside the convention, meaningless outside it.

G is partly a social fact. The 6.674 × 10⁻¹¹ carries the fingerprints of 18th century French surveying decisions. τ does not. τ is the universe's own dimensionless statement about relativistic compactness, independent of every convention ever adopted.

8. Conclusions

We have identified a fundamental ambiguity in the gravitational constant G: its s⁻² dimensional factor encodes a specific time rate, but gravity is a gradient of time rates with no single canonical value. The second embedded in every measurement of G is local, potential-dependent, and self-referentially contaminated by the field G is meant to describe.

This ambiguity predicts exactly what is observed: G measurements disagree across experiments by far more than their stated uncertainties, and no experimental improvement has resolved the disagreement in 200 years. The quantity being measured is not well-defined.

G/c² is well-defined. Its temporal factors cancel exactly, leaving a purely geometric m/kg ratio. GPS navigation confirms G/c² to 10 significant figures without requiring G to 10 significant figures. The universe computes τ = (G/c²)(m/r). It does not compute G and c² separately.

The experimental program for precision gravitational metrology should target G/c² directly through electromagnetic measurements, where the cancellation of temporal contamination is structurally guaranteed. Continuing to measure G in isolation is continuing to ask the universe a question it has no answer to.

The second in G was always the wrong second. There is no right one.

Note on Priority

The central argument of this paper — that G’s temporal dimension is self-referentially ambiguous because gravity is a time gradient — was developed in conversation and has not, to the authors’ knowledge, been stated in this form in the existing literature. The GPS precision argument for G/c² as the physically clean quantity is an empirical observation available in any precision navigation reference but whose metrological implication for G measurement has not been made explicit.

Friday, May 1, 2026

The complete operational definition of physical law in three lines.

The complete operational definition of physical law in three lines:

  1. Remove input units — cancel the arbitrary human unit standards from the measured quantities (e.g., divide by the non-reduced Planck Jacobians).

  2. Do the physics as a pure ratio — work with dimensionless ratios only. This is the only step that involves the eternal, unit‑free relationships.

  3. Decorate with output units — multiply by the appropriate Planck Jacobians (or any chosen unit standards) to express the result in human‑readable units.

Step 2 is the only physics. Steps 1 and 3 are pure accounting — converting between the dimensionless reality and our arbitrary measurement conventions.
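The three steps can be sketched for the compactness parameter τ = (G/c²)(m/r); the mass and radius values below are illustrative:

```python
# The three-step recipe applied to τ = (G/c²)(m/r), using non-reduced
# Planck units as the Jacobians.

import math

H = 6.62607015e-34
C = 299_792_458.0
G = 6.674e-11

l_P = math.sqrt(H * G / C**3)   # Planck length
m_P = math.sqrt(H * C / G)      # Planck mass

def tau(mass_kg, radius_m):
    # Step 1: remove input units — divide each measurement by its Jacobian
    m_ratio = mass_kg / m_P
    r_ratio = radius_m / l_P
    # Step 2: the physics as a pure dimensionless ratio
    return m_ratio / r_ratio
    # Step 3 would multiply by output Jacobians if a dimensional result
    # were wanted; τ itself is already dimensionless.

# Earth's compactness (approximate mass and mean radius): τ ≈ 7e-10
print(tau(5.972e24, 6.371e6))
```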

Redefining Force Units to Expose G as a Metrological Artifact

J. Rogers, SE Ohio 

and a Proposal for High-Precision Measurement of 1/G as a Pure Inertial Constant

Abstract

The gravitational constant G does not describe a physical property of the universe. It is a conversion factor — a metrological patch — that exists because humans defined the unit of force (the Newton) in a way that is misaligned with the natural geometric structure of mass-distance interactions. This paper demonstrates that by redefining force to carry units of kg²/m², G vanishes from the law of gravitation entirely and reappears as k = 1/G inside Newton's Second Law, F = kma. In this reframing, k is a pure inertial constant with no connection to gravity as a phenomenon. We then propose a high-precision experimental design to measure k directly through a clean inertial acceleration experiment using laser interferometry — bypassing the torsion balance entirely and achieving precision that Cavendish-style gravity experiments cannot reach.

1. The Problem with the Newton

The SI unit of force — the Newton — is defined as:

1 N = 1 kg · m/s²

This definition was chosen for convenience. It makes Newton's Second Law trivially true:

F = ma

Because force is defined as mass times acceleration, F = ma contains no physical information whatsoever. It is a tautology. It is a statement about unit definitions, not about nature.

This convenience, however, creates a serious problem. When Newton wrote down the law of universal gravitation:

F = G · (m₁ m₂) / r²

the constant G had to be inserted to make the units balance. The left side has units of kg·m/s². The right side, without G, has units of kg²/m². G carries units of m³/(kg·s²) precisely to bridge this mismatch.

G is not telling us something about gravity. G is telling us that our unit system is incoherent relative to the natural geometry of mass interactions.

2. Redefining Force to Kill G in Gravity Law

Define a new unit of force such that force carries units of kg²/m². That is, force is defined as the natural product of mass-mass interaction over distance squared:

F_new ≡ m₁ m₂ / r² [units: kg²/m²]

Under this definition, the law of gravitation becomes exactly:

F_new = m₁ m₂ / r²

G has disappeared. Not because we set G = 1 by fiat, but because the force unit is now defined in the same geometric terms as the right-hand side. There is no mismatch to correct.

This is not a new physics claim. No experiment is affected. All predictions remain identical. We have changed nothing about the universe. We have only chosen a unit origin that is coherent with the natural geometry of mass interactions.

3. Where G Goes: The Inertial Constant k

Because we have redefined force, Newton's Second Law can no longer be a tautology. F_new and ma do not have the same units:

• F_new has units of kg²/m²

• ma has units of kg·m/s²

To write a second law connecting force to motion, we must introduce a constant k that carries the unit mismatch:

F_new = k · m · a

Dimensional analysis forces the value of k. Since F_new = k · F_SI, and F_SI = ma, we get:

k = 1/G [units: kg s²/m³]

k is not a new constant. It is G, inverted and relocated. Previously G sat inside the gravity equation as a signal of metrological incoherence. Now k sits inside the inertial equation for the same reason. The physics is identical. What has changed is where the constant lives — and that change has consequences for measurement.
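A consistency sketch: for two 1 kg masses 1 m apart, F_new is exactly one unit, and k·m·a recovers it from the SI-side kinematics (the numbers are illustrative, using the CODATA-style G):

```python
# Closure check for the relocated constant k = 1/G: the geometric force
# F_new = m₁m₂/r² equals k·m·a, where a is the SI acceleration.

G = 6.674e-11                  # m³ kg⁻¹ s⁻²
k = 1.0 / G                    # ≈ 1.498e10 kg s²/m³

m1, m2, r = 1.0, 1.0, 1.0      # kg, kg, m
f_new = m1 * m2 / r**2         # geometric force: exactly 1 unit
a = G * m2 / r**2              # SI acceleration of m1 toward m2

assert abs(k * m1 * a - f_new) < 1e-12
print(k)                       # ≈ 1.498e10
```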

4. Why This Matters for Measurement

G is the least precisely measured fundamental constant in physics. After centuries of effort, it is known only to approximately 5 significant figures — far worse than constants like c, h, and e, all of which are exact by definition in the 2019 revision of the SI.

The reason is straightforward: the Cavendish torsion balance is trying to detect an extraordinarily weak gravitational signal against a background of seismic noise, thermal drift, and mechanical vibration. The signal-to-noise ratio is brutal. Every attempt to improve precision runs into the same physical limitations of the torsion balance geometry.

In the reframed system, k is a pure inertial constant. It has nothing to do with gravity as a phenomenon. Measuring k requires:

  • A known force in the new unit system (kg²/m²)

  • A known mass

  • A precise measurement of the resulting acceleration

Acceleration measurement by laser interferometry can resolve displacements at the picometer scale. This is not the limiting factor. The question is whether we can realize a force in kg²/m² units with sufficient precision to make the experiment meaningful — and the answer is yes, as described in the following section.

5. Experimental Proposal: The Inertial Calibration Experiment

The goal is to measure k = 1/G by applying a known force in kg²/m² units to a known mass and measuring the resulting acceleration with laser interferometry. The experiment is entirely non-gravitational in character — gravity is used once, at the beginning, to define the unit of force, and then plays no further role.

5.1 Step One: Define and Realize the Unit Force

The unit force in the new system is fixed exactly by the kg² and m² in its definition: the force is exactly the product of the two reference masses divided by the square of their measured separation. There is no uncertainty from a gravity measurement, just as F = ma carried no measurement uncertainty under the old definition.

F_unit = M² / r² [= 1 unit of force by definition]

This is not a measurement. It is a definition. The numerical value of F_unit in new units is exactly M²/r² by construction. The precision of this step is limited only by the precision of M and r — both of which are controlled by national metrology standards to better than 1 part in 10⁸.

The physical realization of this force does involve the gravitational attraction between the spheres. But we are not measuring that attraction. We are defining it to be our unit. The Cavendish apparatus fails because it tries to measure an extremely weak signal. We avoid that entirely by simply declaring the signal to be 1.

5.2 Step Two: Transfer the Force to the Inertial Track

The gravitational attraction between the tungsten spheres is used to calibrate a force transducer — a precision electrostatic actuator or a cryogenic force balance — that can then apply the same magnitude of force to a test mass in a completely separate, clean environment.

This separation is critical. The inertial measurement environment is:

  • Seismically isolated (optical table on active dampers)

  • Temperature-controlled to millikelvin stability

  • In high vacuum (< 10⁻⁸ mbar) to eliminate air damping

  • Shielded from electromagnetic interference

The test mass m is suspended on a frictionless linear guide (magnetic levitation or superconducting bearing) so that it is free to accelerate along one axis without mechanical contact.

5.3 Step Three: Measure Acceleration by Laser Interferometry

Apply the calibrated 1-unit force F_unit to the test mass m. The test mass begins to accelerate. Track the displacement x(t) of the test mass over time using a heterodyne laser interferometer locked to an iodine-stabilized reference laser.

The interferometer resolves displacement to better than 1 pm over measurement intervals of seconds. From the displacement-time record, acceleration a is extracted by fitting to the kinematic relation:

x(t) = ½ a t²

The fit is performed over thousands of independent measurement runs. Statistical averaging suppresses random noise by √N, where N is the number of runs. Systematic errors — laser frequency drift, test mass charging, residual gas pressure — are characterized and subtracted.
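The extraction in this step can be sketched with synthetic data; the noise level and run parameters below are illustrative assumptions, not values from the proposal:

```python
# Fit synthetic displacement data to x(t) = ½·a·t² and recover the
# acceleration by least squares. The model is linear in the single
# parameter a, so the fit is a one-line normal equation.

import random

random.seed(0)
A_TRUE = 6.674e-11            # m/s², illustrative acceleration
NOISE = 1e-12                 # m, ~1 pm displacement noise

ts = [i * 0.1 for i in range(1, 101)]                 # 10 s record
xs = [0.5 * A_TRUE * t**2 + random.gauss(0, NOISE) for t in ts]

# a_hat = Σ x_i·(t_i²/2) / Σ (t_i²/2)²
num = sum(x * 0.5 * t**2 for x, t in zip(xs, ts))
den = sum((0.5 * t**2) ** 2 for t in ts)
a_hat = num / den

print(a_hat)                  # close to A_TRUE
```

Averaging over many such runs would suppress the residual scatter by √N, as described above.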

5.4 Step Four: Extract k

From the known force F_unit, the known mass m, and the measured acceleration a, k is directly:

k = F_unit / (m · a)

Since F_unit = 1 by definition in the new unit system:

k = 1 / (m · a)

With m known to 1 part in 10⁸ and a measured to a precision limited by the interferometer and averaging time, the achievable precision for k — and therefore for 1/G — is expected to significantly exceed the current 5-significant-figure limit on G.

6. Error Budget

The dominant error sources and their estimated contributions are as follows:

  • Mass standard uncertainty: < 1 part in 10⁸ (traceable to BIPM kilogram definition via Kibble balance)

  • Length standard uncertainty: < 1 part in 10⁹ (traceable to iodine-stabilized laser)

  • Force transducer calibration: estimated 1-5 parts in 10⁷ (dominant term, improvable with cryogenic force balance)

  • Interferometer displacement noise: < 1 pm/√Hz (suppressed by averaging)

  • Residual gas damping: < 1 part in 10⁸ at 10⁻⁸ mbar

  • Seismic noise: suppressed by active isolation to < 1 nm RMS at 1 Hz, negligible over measurement timescale

The experiment is not limited by quantum noise, thermal noise, or any fundamental physical barrier at the target precision. It is limited by engineering — specifically by the precision of the force transducer. This is an improvable engineering problem, not a fundamental one.

7. What This Experiment Is — and Is Not

This experiment is not a gravity experiment. It does not measure the strength of gravitational attraction. Gravity appears only in the single act of defining the unit force by the geometry of two masses — and at that step, we are not measuring anything. We are defining.

Everything after that is pure inertia. We are measuring how much a known mass resists a known force. That is a metrological question about the relationship between our mass scale, our length scale, and our time scale. The answer is k = 1/G, but G as a concept plays no role in the measurement itself.

This is why the precision is achievable. The torsion balance fails because it is trying to detect gravity through noise. This experiment detects inertia — which is not weak, not noisy, and not buried under competing signals. It is the most fundamental mechanical property of matter, and we have instruments precise enough to measure it.

8. Conclusion

G is a metrological artifact. It exists in the gravitational law because humans defined force in units (kg·m/s²) that do not match the natural geometric structure of mass interactions (kg²/m²). Redefining force to carry units of kg²/m² eliminates G from the gravity law entirely and moves it — as k = 1/G — into the inertial law F = kma.

In this location, k is measurable by a clean acceleration experiment using laser interferometry. The experiment has no gravitational signal to detect, no torsion fiber to stabilize, and no seismic noise problem. It is limited only by the precision of force transducer engineering, which is an improvable problem.

The result would be the most precise measurement of G ever achieved — not by doing a better gravity experiment, but by recognizing that G was never a gravity constant to begin with.

k = 1/G ≈ 1.498 × 10¹⁰ kg s²/m³

Thursday, April 30, 2026

You Don’t Own the Code That AI Writes for You: A Problem of Ownership in the Age of Generative Coding

J. Rogers, SE Ohio

Abstract

The rapid adoption of generative AI coding tools has created a quiet legal crisis. Companies are replacing human developers with AI, building massive codebases from machine‑generated output, and assuming that they own the resulting intellectual property. Under current US copyright law, this assumption is largely false. This paper explains why purely AI‑generated code cannot be copyrighted, why the distinction between “AI‑generated” and “AI‑assisted” matters, and how the rush to replace human programmers is producing millions of lines of legally orphaned code. The paper concludes with practical risks and recommendations for organizations relying on AI coding tools.

1. Introduction

In 2026, a software engineer in Stockholm told the New York Times: “I probably spend more than my salary on Claude.” Across the industry, AI coding assistants are being framed as efficiency miracles. Yet a fundamental legal question has been largely ignored: Who owns the code that an AI writes for you?

The answer from the US Copyright Office, federal courts, and the Supreme Court (by its refusal to hear the contrary case) is clear: no human author, no copyright. If a company’s codebase contains large volumes of purely AI‑generated code, that code is effectively public domain. Competitors can copy it. Security flaws can be copied without remedy. And the company cannot sue for infringement.

This paper outlines the legal landscape, the critical distinction between “generated” and “assisted” works, and the real‑world consequences for businesses that are currently firing junior developers and replacing them with AI.

2. The Human Authorship Requirement

US copyright law protects “original works of authorship fixed in any tangible medium of expression.” The Supreme Court has repeatedly held that an “author” must be a human being. In Burrow‑Giles Lithographic Co. v. Sarony (1884), the Court defined an author as “he to whom anything owes its origin.”

In 2023, the US Copyright Office issued policy guidance stating that it “will register an original work of authorship, provided that the work was created by a human being.” Works generated by artificial intelligence with no human creative contribution will not be registered.

Applying this to code: If you ask an AI “Write me a function to validate an email address,” and you copy‑paste the output without meaningful human modification, that function is not copyrightable. Anyone may legally copy it.

2.3 Thaler v. Perlmutter – The Supreme Court Refuses to Intervene

Dr. Stephen Thaler attempted to register a work he said was created entirely by an AI system. The Copyright Office refused. The district court and the DC Circuit affirmed. In March 2026, the Supreme Court declined to hear the appeal, letting the lower rulings stand.

The DC Circuit’s opinion is blunt: “Copyright law requires an ‘author’ – a human being.” The court noted that the Copyright Act uses terms like “children,” “grandchildren,” and “widow,” which only apply to natural persons. The outcome is settled: purely AI‑generated works have no copyright protection.

3. The “Generated” vs. “Assisted” Distinction

The Copyright Office draws a critical line:

  • AI‑Generated — the AI produces the expression with no human creative control over its specific form. Not copyrightable.

  • AI‑Assisted — a human uses AI as a tool but provides sufficient creative input (editing, selecting, arranging, rewriting). May be copyrightable; the human’s contributions are protected.

In practice, most current use of AI coding tools leans heavily toward “generated.” Engineers type a prompt, receive code, and commit it without meaningful change. This is exactly the scenario the Copyright Office describes as lacking human authorship.

The Office has explicitly warned that iterative prompting – asking the AI to refine its output – does not automatically confer authorship. Unless the human makes original, creative modifications to the AI’s output, the result remains unprotectable.

4. Why “Sweat of the Brow” Doesn’t Matter

Some argue that because they spent hours crafting prompts, they “worked hard” and should own the result. Copyright law rejected “sweat of the brow” decades ago. In Feist Publications, Inc. v. Rural Telephone Service Co. (1991), the Supreme Court held that effort alone does not create copyright; there must be original creative expression.

A perfect prompt that generates perfect code still produces a work whose expression originates from the AI, not the human. Unless the human alters that expression creatively, there is no copyright.

5. Real‑World Consequences for Companies

If a competitor copies a purely AI‑generated function from your product, you have no copyright infringement claim. The code is not yours in the eyes of the law. This undercuts the entire value proposition of proprietary software.

5.2 M&A Due Diligence Nightmare

When a company is acquired, the buyer’s lawyers will ask: “What percentage of your code was AI‑generated without human modification?” A high percentage could make the target’s “crown jewel” IP worthless. Deals will collapse or valuations will crater.

5.3 Security and Liability Traps

You cannot retroactively copyright AI‑generated code that already exists. If that code contains a vulnerability, and a competitor copies it, you have no legal recourse. Worse, if the AI reproduced code from its training set that is subject to a restrictive license (GPL, etc.), you could be sued for infringement by the original human author – while you still own none of your own output.

5.4 The “Vibe Coding” Fad Is Self‑Defeating

The current trend of “vibe coding” – describing an app idea to an AI and committing whatever it produces – is legally catastrophic. Entire startups are being built on code that belongs to no one. Investors who discover this will walk away.

6. Can You Protect AI‑Generated Code Any Other Way?

Copyright is not the only form of IP, but the alternatives are weak:

  • Trade secret – Protects confidential information, but offers no protection if the code is reverse‑engineered or independently discovered. Once AI‑generated code is distributed (e.g., in a compiled app), trade secret protection is largely lost.
  • Patent – Might cover algorithms, but most routine code does not meet the novelty and non‑obviousness requirements. AI‑generated code is unlikely to be patentable.
  • Contract – Terms of service can restrict users, but contracts don’t bind competitors who never agreed to them.

In short, there is no substitute for copyright for protecting the literal expression of software code.

7. Recommendations

For organizations using AI coding tools:

  1. Audit your codebase – Identify which files or functions were AI‑generated with minimal human modification. Flag them as unprotectable.
  2. Change workflows – Require that a human engineer meaningfully edit, rewrite, or arrange any AI‑generated output before committing. Document the creative changes.
  3. Maintain a “human authorship” log – Record who modified what, and what creative choices were made.
  4. Do not replace junior developers – Juniors are the humans who will provide the creative modifications needed for copyright. Without them, you are building an unownable codebase.
  5. Consult legal counsel – The law is evolving. The EU and other jurisdictions may take different approaches. But under current US law, the risk is real and severe.

8. Conclusion

The narrative that AI coding tools are simply “efficiency” ignores a foundational legal reality: you do not own what you do not create. When companies fire entry‑level engineers and replace them with AI, they are not just losing future senior talent – they are losing the legal ability to claim ownership over their own product.

The seed corn is being sold. The code being written today may be free for anyone to take tomorrow. And the executives celebrating quarterly stock bumps will be long gone when the lawyers arrive to ask: “Who wrote this – and do you have the papers to prove it?”

The answer, more often than not, will be no.

Sunday, April 26, 2026

How Planck Accidentally Found the Way Back to Newton

The Detour and the Bridge:

How Physics Mistook a Bookkeeping Constant for a Discovery,

and How Planck Accidentally Found the Way Back to Newton

J. Rogers, SE Ohio

Abstract

Newton’s original statement of universal gravitation was a pure proportionality: force scales with the product of masses and inversely with the square of distance. No units. No constants. Just ratios in proportion to ratios. That statement was physically complete. The gravitational constant G was not a discovery about the universe — it was inserted a century and a half later to convert Newton’s dimensionless proportionality into an equation that balances in human unit systems. Physics then told a story in which G represented a deepening of Newton, a quantification of something Newton had only sketched. That story is wrong.

In 1899 Max Planck, working on an unrelated problem in blackbody radiation, stumbled onto three combinations of h, c, and G that produce units of mass, length, and time independent of human convention. He recognized them as universal and called them natural units. But Planck did not see what his discovery actually was. He had found the exact Jacobians — the conversion factors — that translate Newton’s pure unit-free proportions into any human unit chart and back out again without losing anything. He built the bridge back to Newton without knowing the bridge existed or what it connected.

We show that G is not a constant of nature but a composed Jacobian: G = Fₚ · (lₚ/mₚ)², where Fₚ, lₚ, and mₚ are non-reduced Planck units constructed from h, c, and G itself. The physics of gravity lives entirely in the dimensionless ratio X = m₁m₂/r² expressed in Planck-scaled units. G appears only when we demand SI output. It is the price of the equals sign in a human unit chart, not a fact about the universe. Recognizing this, we see that Planck’s 1899 result was not the discovery of a natural unit system — it was the rediscovery of Newton’s natural ratios, dressed in the language of a different century.

1. Newton’s Original Statement

Isaac Newton’s law of universal gravitation, as he understood it, was a statement of proportion. Two bodies attract each other with a force that grows with their masses and diminishes with the square of the distance between them. In the notation Newton worked with, this is:

F ∝ mM/r²

The proportionality sign is doing everything here. It says: if you double one mass, the force doubles. If you double the distance, the force drops to a quarter. The ratios are the physics. Newton was describing how things scale relative to each other, not assigning absolute magnitudes in any particular unit system.

This was not a gap in Newton’s understanding waiting to be filled. It was a complete physical statement. Newton knew that the actual numerical value of the force would depend on how you chose to measure mass, distance, and force — on your unit chart. The proportionality was his way of saying: the physics is in the ratios, not in the numbers.

Newton’s contemporaries and successors understood this. For the century and a half following the Principia, gravitational calculations were done by comparing ratios — the mass of the Earth relative to the Sun, the distance of Venus relative to the Earth — without any need for an absolute constant. The proportionality was sufficient for every astronomical calculation of the era.

2. The Invention of G

The gravitational constant G did not appear in Newton’s Principia. It was not present in the work of the eighteenth century astronomers who used Newton’s law to map the solar system with extraordinary precision. It entered physics in the nineteenth century, after Henry Cavendish had measured the density of the Earth with a torsion balance in 1798, when the need arose to state gravitational attraction as an equation with an equals sign rather than a proportionality.

The problem was this: if you write

F = mM/r²

the dimensions do not balance. The left side has units of force. The right side has units of mass squared divided by length squared. To make the equation dimensionally consistent in any human unit system — SI, CGS, or any other — you need a conversion factor. That factor is G.

G was invented to solve a bookkeeping problem. It carries units of m³ kg⁻¹ s⁻² in SI — units chosen precisely to cancel the dimensional mismatch on the right-hand side of Newton’s equation and produce newtons on the left. G is not measuring anything about gravity. It is measuring the distance between Newton’s dimensionless proportionality and the SI unit chart.

Physics then taught this story: Newton discovered the law, and Cavendish ‘weighed the Earth’ by measuring G, and now we know not just the shape of the law but its strength. This framing implies G is telling us something physical — the intrinsic coupling strength of gravity, some fundamental fact about how strongly matter attracts matter.

That implication is false. The numerical value of G — 6.674 × 10⁻¹¹ in SI units — is determined by the sizes of the kilogram, the meter, and the second. Change your unit chart and G changes with it. A fact about the universe does not change when you redefine your ruler.

3. The Story Physics Told Itself

For over a century, physics organized itself around the belief that G, c, h, and k_B were fundamental constants of nature — dimensionful numbers that characterize the universe independently of human choices. This belief generated a research program: measure these constants as precisely as possible, look for relationships between them, and wonder at their particular values.

The wonder was genuine. Why is G so small? Why does the universe have this particular gravitational coupling? The ‘hierarchy problem’ — the enormous disparity between the strength of gravity and the other forces — became one of the central puzzles of twentieth century physics. Entire theoretical frameworks were constructed to explain why G has the value it has.

These were the wrong questions, asked about the wrong things. G is small because the kilogram is an enormous unit relative to the Planck mass, and the meter is an enormous unit relative to the Planck length, and the second is an enormous unit relative to the Planck time. The hierarchy problem is not a problem about gravity. It is a statement about the position of human-scale units relative to the natural scale of the universe. We built our measurement system around things we can hold and count and observe with unaided senses, and those things are extraordinarily far from the Planck scale. G looks small because we are large.

The constants were not discovered. They were constructed — forced into existence by the decision to do physics in human unit systems while the underlying physics has no units at all.

4. Planck’s 1899 Discovery

4.1 What Planck Was Trying to Do

In 1899 Max Planck was working on the problem of blackbody radiation — the spectrum of light emitted by a perfect absorber in thermal equilibrium. This was a problem in thermodynamics and electromagnetism, seemingly unrelated to gravity or to fundamental units. In the course of this work Planck introduced a new constant h, later called the quantum of action, to fit the observed spectrum.

Having h in hand, Planck noticed something remarkable. The three constants then known — h, c (the speed of light), and G (the gravitational constant) — could be combined to produce units of mass, length, and time:

lₚ = √(hG/c³)
mₚ = √(hc/G)
tₚ = √(hG/c⁵)
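These three combinations can be evaluated directly. The sketch below uses CODATA SI values; since the paper insists on unreduced h throughout, the results are √(2π) times the familiar ħ-based values.

```python
import math

# CODATA SI values; h is the unreduced Planck constant, as this paper uses
h = 6.62607015e-34   # J*s
c = 2.99792458e8     # m/s
G = 6.674e-11        # m^3 kg^-1 s^-2

l_P = math.sqrt(h * G / c**3)   # Planck length, ~4.05e-35 m
m_P = math.sqrt(h * c / G)      # Planck mass,   ~5.46e-8 kg
t_P = math.sqrt(h * G / c**5)   # Planck time,   ~1.35e-43 s

# Sanity check: the length and time scales are related by c
# (l_P = c * t_P follows from the definitions).
```

With unreduced h these come out √(2π) larger than the ħ-based tabulated values (1.616e-35 m, 2.176e-8 kg, 5.391e-44 s).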

Planck computed these and observed that they were independent of any human choice of units: any consistent unit system, converted back, picks out the same physical scales. He wrote that these represented ‘natural units’ of measurement, units that would be recognized by any civilization anywhere in the universe.

4.2 What Planck Saw

Planck saw the universality. He correctly recognized that lₚ, mₚ, and tₚ do not depend on the particular conventions of any human culture — not on the size of the Earth, not on the properties of water, not on any artifact kept in a vault in Paris. He saw that these were, in some sense, nature’s own scales.

This was a genuine insight and Planck was right to be struck by it. The universality he identified is real. These scales do appear wherever a sufficiently advanced physics arrives at the intersection of quantum mechanics, relativity, and gravity, regardless of what unit chart they started with.

4.3 What Planck Did Not See

Planck did not ask why three constants from three apparently independent domains of physics — quantum mechanics, electromagnetism, and gravity — would combine to produce universal scales. He did not follow that question to its answer.

The answer is that h, c, and G are not three independent discoveries about three independent phenomena. They are three Jacobians — three conversion factors between the three independent axes that humans chose for their measurement system (energy-time, space-time, mass-space) — and the dimensionless ratios that actually describe the universe underneath those axes. They combine to produce universal scales because they are all pointing at the same thing from different angles. Their combination is universal because there is one thing on the other side of all three of them.

Planck found three pointers and admired their universality without asking what they were all pointing at. He assumed the three axes — mass, length, time — were genuinely independent, with a natural scale on each. He found the bridge and admired it without crossing it.

Most critically: Planck still called what he found a ‘unit system.’ Natural units. A more convenient coordinate system. He stayed within the framework of dimensional physics, just with better-chosen dimensions. He did not see that the universality he had found was evidence that dimensions are not fundamental at all — that the natural scale is not a scale for three independent things but the single point where three projections of one thing simultaneously equal unity.

5. G Is a Composed Jacobian

The relationship between G and the Planck units is not a definition imposed from outside. It is an identity that follows from the construction of the Planck units themselves:

G = Fₚ · (lₚ / mₚ)²

where Fₚ = mₚc/tₚ is the Planck force. This is not circular. It is the statement that G, when decomposed into its constituent Planck factors, is entirely made of h, c, and the Planck scales derived from them. G carries no information that is not already in h, c, and the structure of the Planck bridge.
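The decomposition stated above can be verified numerically. In the sketch below, recomposing Fₚ · (lₚ/mₚ)² from the Planck-scale definitions returns G to machine precision; algebraically, h cancels and the expression reduces to c³·tₚ/mₚ = G.

```python
import math

h, c, G = 6.62607015e-34, 2.99792458e8, 6.674e-11

l_P = math.sqrt(h * G / c**3)    # Planck length
m_P = math.sqrt(h * c / G)       # Planck mass
t_P = math.sqrt(h * G / c**5)    # Planck time
F_P = m_P * c / t_P              # Planck force

# The composed Jacobian: G recovered from Planck factors alone
G_recomposed = F_P * (l_P / m_P)**2
```

The agreement is exact up to floating-point rounding, because the identity holds by construction rather than by measurement.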

The three-step procedure for any physical law makes this explicit:

  1. Cancel input units. Express each physical quantity as a dimensionless ratio to its Planck-scale counterpart. Mass becomes m/mₚ. Distance becomes r/lₚ. The inputs are now pure numbers.

  2. Do the physics as Newton stated it. The gravitational relationship in pure ratios is:

X = (m₁/mₚ)(m₂/mₚ) / (r/lₚ)²

This is Newton’s proportionality, now written as an equality between dimensionless ratios. X is a pure number. No units. No constants. This is the physics.

  3. Decorate with output units. Multiply X by the Planck force to get force in SI:

F_SI = X · Fₚ

G appears automatically when you substitute the Planck unit definitions and simplify. It was never in the physics. It emerges from step 3 alone — from the decision to express the output in SI newtons rather than in Planck forces. G is the Jacobian of that decision.
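The three-step procedure can be run end to end. The masses and separation below are rounded Earth–Moon values chosen purely for illustration; the point is that X · Fₚ reproduces G·m₁m₂/r² identically, with G never appearing in the physics steps.

```python
import math

h, c, G = 6.62607015e-34, 2.99792458e8, 6.674e-11

l_P = math.sqrt(h * G / c**3)
m_P = math.sqrt(h * c / G)
t_P = math.sqrt(h * G / c**5)
F_P = m_P * c / t_P                      # Planck force

# Illustrative inputs (roughly Earth, Moon, Earth-Moon distance)
m1, m2, r = 5.97e24, 7.35e22, 3.84e8     # kg, kg, m

# Steps 1-2: cancel input units, do the physics as a pure number
X = (m1 / m_P) * (m2 / m_P) / (r / l_P)**2

# Step 3: decorate with output units (SI newtons)
F_si = X * F_P

# Comparison: the textbook form with explicit G
F_newton = G * m1 * m2 / r**2
```

The two results agree to machine precision, which is the paper's claim in miniature: G emerges from step 3, not from the physics.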

This procedure works for every physical law. Newton’s second law, the Planck-Einstein relation, de Broglie’s wavelength, Boltzmann’s energy-temperature relation — in every case, the physics is a dimensionless ratio X, and the constants (h, c, k_B, G) appear only in step 3 when human units are restored. They are always and only Jacobians.

6. The Planck Scale Is Not a Unit System — It Is the Inversion Point

The standard presentation of Planck units frames them as a particularly convenient coordinate system — one where the constants all equal one and the equations simplify. This framing is subtly wrong in a way that preserves the error Planck made.

The Planck scale is not a unit system. It is the inversion point of the measurement coordinate system — the unique scale where two opposing scaling directions simultaneously cross unity.

Consider the six Planck-normalized ratios:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

Some of these ratios — m/mₚ, E/Eₚ, p/pₚ — increase as a physical system gets larger or more energetic. Others, such as lₚ/λ, decrease: larger objects have longer wavelengths, so the ratio shrinks. These are reciprocal scalings pulling in opposite directions.

The Planck scale is where these opposing directions exactly cancel — where every ratio simultaneously equals one. It is the crossing point of reciprocal hyperbolas in logarithmic scale space. There is exactly one such point, and it is unique regardless of what unit chart you start from. That uniqueness is why Planck’s scales are universal. Not because they are natural units. Because they are the fixed point of the reciprocal structure of physical measurement.

When physicists say ‘set the constants to one,’ they are performing this operation informally and without justification — collapsing onto the inversion point without knowing that’s what they’re doing, or why it works, or what it means. The Planck bridge makes the operation rigorous: you are not choosing convenient units, you are expressing physics at the unique scale where all projections of X simultaneously read one.

And crucially: the Planck length is not the pixel of space. The Planck time is not the pixel of time. Physics has made exactly this claim for length and time while quietly not making it for mass — no one claims the Planck mass is the minimum mass, because it is obviously not; the electron is twenty-two orders of magnitude lighter. But the Planck mass is constructed from the same h, c, G combination as the Planck length and Planck time. If Planck mass is not a pixel, neither are Planck length and Planck time. They are all inversion-point coordinates. None of them are fundamental discretizations of anything.

The proof is immediate: change your unit system. Planck length changes. Planck time changes. Planck mass changes. A pixel of the universe cannot change when you redefine your meter. These scales are Jacobian-dependent, not universe-dependent. They are pointers to the inversion point, not the inversion point itself. The inversion point has no size because X has no units.

7. Newton Had It Right

Returning to Newton’s proportionality with this understanding, we see that Newton’s statement was not incomplete. It was not a sketch awaiting G to make it precise. It was the complete physical statement, expressed in the only form that is actually about the universe rather than about human measurement conventions.

F ∝ mM/r² says: the gravitational interaction scales as the product of mass ratios divided by the square of the distance ratio. It does not say what units to use because units are not part of the physics. Newton was doing X — working directly with dimensionless ratios in pure proportion — without the vocabulary to say so explicitly.

What the three centuries between Newton and the present have produced is not a deepening of Newton’s insight but an elaborate detour around it. We inserted G to get an equation, then treated G as a discovery. We measured G with increasing precision. We built theoretical frameworks to explain G’s value. We worried about the hierarchy problem — why G is so small — without recognizing that G’s smallness is a statement about the size of a kilogram, not about the strength of gravity.

Planck in 1899 handed us the receipt for the detour. The Planck units are the exact conversion factors that show what the detour cost and how to return. h converts between the energy-frequency axis and dimensionless X. c converts between the space-time axis and dimensionless X. G, composed from these and the Planck scales, converts between the mass-geometry axis and dimensionless X. Together they are the bridge from any human unit chart back to Newton’s pure proportions.

Planck built the bridge without knowing what it connected. He was looking at the far shore — the universality of the Planck scales — and called it a natural unit system. The near shore — Newton’s dimensionless proportionalities — was behind him, and he did not turn around.

8. The Equivalence Chain as the Full Statement

Once the bridge is crossed, the full structure becomes visible. The six Planck-normalized ratios are not six different physical quantities. They are six projections of a single dimensionless scalar X onto six different human measurement axes:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

This is not a system of proportionalities. It is a single identity written six times in six different human languages. Every physical quantity is X, read on a different axis.

From six projections taken two at a time, C(6,2) = 15 pairs arise. Each pair is a known physical law: E = mc², E = hf, E = k_BT, λ = h/p, p = hf/c, λT = hc/k_B, and so on. These are not fifteen independent discoveries. They are fifteen different ways of writing X = X, each using two of the six available human axes. The constants that appear in each law — c², h, k_B, c — are the Jacobians for that particular pair of axes.
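The combinatorial count is easy to check mechanically. The axis labels below are informal names for the six projections in the chain (my labels, not the paper's notation):

```python
from itertools import combinations

# The six Planck-normalized projections of X, informally labeled
axes = ["E/E_P", "f*t_P", "m/m_P", "T/T_P", "l_P/lambda", "p/p_P"]

# Every unordered pair of axes corresponds to one "law" in the paper's
# reading; C(6,2) = 15 such pairs exist.
pairs = list(combinations(axes, 2))
```

Each element of `pairs` names two axes; the paper's claim is that equating those two projections of X reproduces one familiar relation.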

Physics discovered these laws one at a time over three centuries and treated each as a new insight into nature. The Planck-Einstein relation E = hf was a revolution in quantum mechanics. De Broglie’s λ = h/p was a revolution in wave-particle duality. Wien’s displacement law was a triumph of thermodynamics. They are all the same tautology, X = X, with different Jacobian decorations.

The statistical argument is decisive: the probability that fifteen independently discovered laws would align with exactly the combinatorial pattern of C(6,2) pairs from a single six-member equivalence chain, by coincidence, is less than 10⁻²². This is not coincidence. This is forensic evidence that the laws were never independent. They were always projections of one thing.

9. What Physics Got Wrong and What Comes Next

Physics got the math right. Every prediction of Newtonian gravity, every quantum mechanical calculation, every thermodynamic result — the numbers are correct. The Jacobians h, c, and G work perfectly as conversion factors. No experiment needs to be redone.

What physics got wrong was the interpretation. The constants were treated as discoveries about the universe when they are facts about human unit charts. The Planck scale was treated as a natural unit system when it is the inversion point of a reciprocal coordinate structure. The fifteen laws were treated as independent discoveries when they are projections of one identity. The hierarchy problem was treated as a deep puzzle about gravity when it is a statement about the size of a kilogram.

The correction does not change any formula. It changes what the formulas mean.

Newton’s proportionality is the complete physics of gravity. G is the SI Jacobian. The Planck units are the bridge between them. The equivalence chain is what you find when you cross the bridge. X is what Newton was always describing.

Physics spent over three centuries on a detour. Planck in 1899 — working on an unrelated problem, not knowing what he was doing — accidentally built the way back. It has taken another century to read the sign on the bridge.

10. Conclusion

Newton’s law of universal gravitation was stated as a pure proportionality because that is what it is. The physics of gravity lives in dimensionless ratios. G was not a discovery about gravity. It was the conversion factor inserted to make Newton’s proportionality into a dimensional equation in human units, and it has been mistaken for physical content ever since.

Planck’s 1899 result was not the discovery of natural units. It was the discovery of the three Jacobians — h, c, G — that bridge Newton’s dimensionless ratios to any human unit chart. The Planck scales are not the pixels of space and time. They are the unique inversion point where the reciprocal scaling of physical measurement axes simultaneously reaches unity — the one scale where all six projections of X can simultaneously equal one. The Planck mass being obviously not a pixel of matter is the proof that Planck length and Planck time are not pixels either. All three are Jacobian-dependent pointers, not fundamental discretizations.

The equivalence chain E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X is the full statement of what Planck found, stated in the language Planck did not have. It shows that every physical quantity is one dimensionless ratio X, that every physical law is X = X written on two axes, and that every constant is the Jacobian for a particular pair of axes.

We did not go beyond Newton. We took a three-century detour through dimensional bookkeeping and called it progress. Planck handed us the bridge back in 1899. The bridge was always there. We just did not know what it connected.

Time as Self-Interaction: How the Apparent Arrow Arises from a Single Dimensionless Substrate

J. Rogers, SE Ohio

Abstract

We present a framework in which time is not a fundamental dimension but an emergent label humans place on the sequential updating of a single dimensionless substrate X. The universe has no units — it does not measure itself. X is a dimensionless ratio, and every physical quantity we measure is a projection of X onto a human-chosen axis. The Lorentz factor γ is itself dimensionless, and a boost does not separately affect time, mass, length, and momentum as distinct phenomena — it changes X, and γ is that change. The six Planck-scaled projections of X:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

are not six different physical laws. They are one thing — X — read on six different human axes. Any pair of these six yields a known physical relationship, producing 15 such relationships from a single identity. This holds in every unit system imaginable, because X is dimensionless and the universe has no preferred unit chart. Past states are not stored in a separate temporal dimension; they exist only as patterns in the current configuration of X.

1. Introduction

Standard physics treats time as a fourth dimension with a fixed metric signature and postulates an independent arrow of time. This leads to persistent conceptual difficulties: the problem of the past, the asymmetry between time and space dimensions, and the apparent paradoxes of retrocausality in quantum experiments.

We propose an alternative grounded in a single observation: the universe does not measure itself. Units — seconds, kilograms, meters — are human inventions. Any quantity that carries dimensions is already a projection, a reading of the universe through a human-chosen instrument. The universe itself operates on something prior to measurement.

We call that prior thing X: a dimensionless, unitless ratio that completely describes the state of reality at any instant. X does not evolve in time. The transition X → X' is what we call time. There is no external clock. There is no dimension being traversed. There is only X updating.

2. X Is Dimensionless — Not Because We Choose Clever Units, But Because the Universe Has None

A key error in discussions of natural units is the implication that setting c = ħ = 1 makes things simpler by choice. This misses the point. Natural units are still units — still a human coordinate system. The universe does not operate in natural units any more than it operates in SI.

X is dimensionless not as a result of any unit choice. It is dimensionless because dimensions are human annotations applied to projections of X. The universe just does X. We then read X through six different instruments and assign six different dimensional labels to what we find.

The Planck units are significant not because they are 'natural' but because they are the specific Jacobian at which the human unit chart admits that all six projections yield the same number. They are conversion factors between human axes, not fundamental features of the universe. The constants h, c, and G — used unreduced, never ħ — are the three such Jacobians between the three independent ways humans chose to measure reality.

3. The Six Projections of X

Every physical quantity we measure is X read on a different axis. The six Planck-scaled projections are:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

where subscript P denotes Planck units constructed from h, c, and G (unreduced). Each ratio is dimensionless. Each ratio is identical. This is not a collection of proportionalities — it is a single identity written six times in six different human languages.

From six projections taken two at a time, C(6,2) = 15 pairs arise. Each pair is a known physical relationship:

E/Eₚ = f·tₚ → Planck relation E = hf

E/Eₚ = m/mₚ → mass-energy equivalence E = mc²

m/mₚ = p/pₚ → momentum-mass relation p = mv (relativistic form)

lₚ/λ = p/pₚ → de Broglie relation λ = h/p

And so on for all 15 pairs. These are not 15 different laws discovered independently. They are 15 different ways of writing X = X, each pair using two of the six human axes. Physics discovered them separately because it was looking at pairs of projections and calling each pair a law, never seeing that all six projections are the same single dimensionless quantity.
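Four of the six projections can be checked directly for a photon, using the unreduced-h Planck units the paper specifies (for a photon, E = hf, λ = c/f, p = E/c). The mass and temperature axes require the E = mc² and E = k_BT readings and are omitted from this minimal sketch.

```python
import math

h, c, G = 6.62607015e-34, 2.99792458e8, 6.674e-11

# Unreduced-h Planck scales, as this paper uses
l_P = math.sqrt(h * G / c**3)
m_P = math.sqrt(h * c / G)
t_P = math.sqrt(h * G / c**5)
E_P = m_P * c**2                # Planck energy
p_P = m_P * c                   # Planck momentum

f = 5.0e14                      # Hz, an optical-frequency photon
E = h * f                       # Planck relation
lam = c / f                     # photon wavelength
p = E / c                       # photon momentum

# Four projections of the same photon onto four axes
ratios = [E / E_P, f * t_P, l_P / lam, p / p_P]
```

All four ratios agree to machine precision, which is the equivalence-chain claim restricted to the photon's accessible axes.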

4. The Boost Changes X — γ Is That Change

The Lorentz factor γ is dimensionless. X is dimensionless. This is not coincidental.

When a boost occurs, X changes. γ is the ratio of the new X to the old X as measured on any chosen axis. Because X appears identically on all six axes simultaneously, γ applies to all six axes simultaneously.

This is why a boost appears to change mass, time rate, length, momentum, and energy all at once. Physics treats these as separate relativistic effects linked by the Lorentz transformations, implying they are different phenomena that happen to correlate. They are not. There is one phenomenon — X changing — and γ is that change. The six axis-readings change together because they were always readings of the same single thing.

The standard framing says: motion causes time dilation, and also causes length contraction, and also causes relativistic mass increase. Each 'also' is a mistake. There is no cause and effect chain between a boost and its consequences. The boost is the change in X, and γ is that change, and everything else is humans reading X on their chosen axes.

Asking why time dilates when you boost is like asking why a circle looks like an ellipse when you tilt it. You changed your projection angle. The circle did not do anything. X did not do anything to time separately from what it did to mass separately from what it did to length. It changed once. γ is that one change.

5. Inertia Is Not a Mystery

An object in motion stays at its current X. This is not a law requiring explanation. There is nothing pushing the object and nothing to stop it from changing unless another interaction occurs. Inertia is the substrate maintaining its current state until X → X' is forced by an interaction.

Newton's first law looked like a law requiring a mechanism. In this framework it is a tautology: X stays X until something makes it X'. The 'something' is another interaction — a collision, a field, a measurement — that forces an update. Between interactions there is no time passing in any meaningful sense. There is just X, unchanged.

6. No Past, Only Patterns

The substrate X does not retain a separate past state. When an interaction occurs, X' completely replaces X. The only record of any previous state is in patterns carried forward in the current configuration — the arrangement of atoms in a memory device, photons not yet absorbed, quantum correlations not yet collapsed.

The past is not a place. It is a pattern in the present. When that pattern is erased or overwritten, the past appears to change — but no time travel occurred. There was no past state to travel to. The trace simply did not survive the update.

This resolves the quantum eraser without invoking retrocausality. The experimenter's choice to measure or erase which-path information participates in a single self-consistent update of X. There is no earlier photon path being retroactively affected. There is only the final correlation pattern — the final X — which is self-consistent with all interactions that participated in producing it.

7. The Arrow of Time

The arrow of time is the direction of accumulating X → X' updates. It points the way it does because interactions are irreversible in practice: the final X retains less information about previous states than would be required to reconstruct them. This is not a fundamental asymmetry built into the geometry of a time dimension. It is a consequence of pattern loss during updates.

A broken egg does not reconstruct itself because the pattern required to reverse the update was not carried forward in X. The arrow exists because information is lossy, not because time has a preferred direction geometrically.

8. Relation to the 2019 SI Redefinition

The 2019 SI redefinition fixed h, k_B, and other constants as exact values; c had already been exact since the 1983 redefinition of the meter. This was officially described as redefining units, not changing physics. In our framework, this is precisely correct: c, h, and G are unit-chart Jacobians — conversion factors between the axes onto which humans project X. Fixing them as exact is an admission that they are not physical discoveries about the universe. They are bookkeeping choices about how to align human measurement axes.

The speed of light c is not a speed the universe obeys. It is the ratio between the human time-axis and the human space-axis. When both axes are projections of the same X, their ratio is fixed — not by physics, but by the geometry of projection.

9. Falsifiability

The framework makes a clear falsifiability criterion. A genuine logical contradiction between a stored trace and a later outcome — a dead cat that was previously alive with no causal chain, a photon arriving before it was emitted — would disprove it. No such experiment exists. Every apparent retrocausal result is consistent with a single self-consistent X update in which the 'earlier' trace was simply never stored in a way that survived.

Additionally: if any physical quantity required separate dimensional status — if any measurement could not be expressed as a dimensionless ratio to its Planck-scale counterpart — the framework would be incomplete. Every quantity so far reduces to X on one of the six axes.

10. Conclusion

The universe has no units because it does not measure itself. Every physical quantity is X — a dimensionless ratio — projected onto a human-chosen axis. The six Planck-scaled projections are identical. Their 15 pairwise combinations are the known laws of physics, each one a different human reading of the single identity X = X.

A boost changes X. γ is that change. Mass dilation, time dilation, length contraction, momentum change — these are not separate effects that happen together. They are one change in X read on multiple axes simultaneously.

Time is not a dimension. It is the accumulation of X → X' updates. The arrow of time is the direction of pattern loss during those updates. Inertia is X staying X between interactions. Retrocausality is an illusion caused by misreading pattern overwriting as backward causation.

The framework does not add new physics. It removes the unnecessary scaffolding — dimensions, separate constants, causal chains between correlated projections — and reveals that what remains is X, dimensionless, unitless, and singular.

Saturday, April 25, 2026

Liquid Literature: A Framework for Dynamic, State-Driven Narrative Generation via Model Context Protocol (MCP)

J. Rogers, SE Ohio

Abstract

Traditional literature relies on a static, linear transmission of prose from author to reader. While recent advancements in Large Language Models (LLMs) have enabled procedural text generation, long-form narrative consistency has historically been bottlenecked by the limitations of Retrieval-Augmented Generation (RAG). This paper proposes a novel “Liquid Literature” framework, wherein a book is not authored as prose, but as a structured, parameterized narrative matrix (a “Seed-Book”). By replacing passive RAG architectures with an active state-machine driven by the Model Context Protocol (MCP), we outline a system where AI dynamically generates a highly customized, continuity-locked novel upon each reading, capable of user-driven genre overrides, emergent plotlines, and deterministic EPUB exportation.


1. Introduction

The transition from physical books to e-books digitized the delivery of literature, but did not alter its ontological nature: a book remained a static, unchangeable artifact. Interactive fiction and tabletop role-playing games introduced branching narratives, but remained constrained by the manual labor required to author every possible permutation.

With the advent of LLMs, personalized generative literature became theoretically possible. However, early attempts relying on context-window stuffing or Retrieval-Augmented Generation (RAG) proved inadequate for long-form fiction. RAG is fundamentally passive—a semantic search engine that retrieves localized context but fails to understand overarching narrative mechanics, leading to continuity errors, character amnesia, and logical breakdowns.

We propose a shift from RAG to the Model Context Protocol (MCP). Under this framework, the “book” functions as a local, lightweight server—a deterministic state machine. The LLM does not merely “read” previous chapters; it queries and updates the narrative state via APIs, enabling flawless continuity, real-time user overrides, and emergent narrative generation.


2. The “Seed-Book” Paradigm

In the Liquid Literature framework, the human author transitions from a Wordsmith to a World Architect. Instead of drafting prose, the author engineers a “Seed-Book”—a highly structured database (e.g., JSON, YAML, or SQLite) containing:

  1. Ontological Rules: The physics, magic systems, and societal constraints of the world.
  2. Psychological Matrices: Character profiles detailing motivations, secrets, speech syntax, and dynamic relationship affinities.
  3. Plot Nodes: A web of narrative beats with prerequisite triggers (e.g., Node_41: Betrayal triggers only if Trust_Score < 30).
  4. Variable Hooks: Parameterized elements left intentionally blank or mutable for user customization.
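The four components above could be laid out in many ways; the following is a minimal sketch of one possible Seed-Book as a plain Python dict mirroring the JSON layout, with a helper that evaluates a plot node's prerequisite trigger. All field and node names beyond those in the text are hypothetical.

```python
# A minimal Seed-Book sketch: one dict mirroring the JSON structure
# described above. All names beyond the text's examples are hypothetical.
seed_book = {
    "ontological_rules": {
        "magic_allowed": True,
        "death_is_permanent": True,
    },
    "psychological_matrices": {
        "Hero": {"motivation": "redemption", "secret": "exiled noble",
                 "affinity": {"Villain": {"trust": 5, "respect": 80}}},
    },
    "plot_nodes": {
        "Node_41": {"name": "Betrayal",
                    "trigger": {"stat": "Trust_Score", "below": 30},
                    "locked": False},
    },
    "variable_hooks": {
        "genre": "high_fantasy",      # user-overridable
        "antagonist_species": None,   # intentionally blank
    },
}

def node_triggers(node, stats):
    """Check a plot node's prerequisite against the current stats."""
    trig = node["trigger"]
    return stats.get(trig["stat"], 0) < trig["below"]

print(node_triggers(seed_book["plot_nodes"]["Node_41"], {"Trust_Score": 12}))  # True
```

With `Trust_Score` at 12, `Node_41: Betrayal` fires, matching the `Trust_Score < 30` rule given above.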

3. Architectural Framework: MCP as the Narrative Engine

The core innovation of this framework is the deployment of MCP to maintain narrative state. The Seed-Book operates an MCP server that the generative LLM interacts with in real-time.

3.1 Overcoming the Limitations of RAG

Where RAG searches for keywords in past text (e.g., searching for mentions of “the sword”), the MCP framework treats the narrative as a computable database. If the LLM needs to resolve an action, it makes a direct tool call to the MCP server:

  • query_inventory(Character="Protagonist") -> Returns: [Vibro-knife, Smoke Grenade]
  • query_affinity(Subject="Hero", Target="Villain") -> Returns: Respect: 80%, Romance: 15%, Trust: 5%
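The server side of those two calls can be sketched with plain functions over an in-memory state dict. A real MCP server would expose these as JSON-RPC tool calls; the data values here are hypothetical.

```python
# A toy, in-memory stand-in for the MCP server's query tools.
# In practice these would be registered as MCP tool calls; here
# they are plain functions over a state dict (values hypothetical).
state = {
    "inventory": {"Protagonist": ["Vibro-knife", "Smoke Grenade"]},
    "affinity": {("Hero", "Villain"): {"Respect": 80, "Romance": 15, "Trust": 5}},
}

def query_inventory(character):
    """Return a copy of a character's current inventory."""
    return list(state["inventory"].get(character, []))

def query_affinity(subject, target):
    """Return the directed affinity scores from subject to target."""
    return dict(state["affinity"].get((subject, target), {}))

print(query_inventory("Protagonist"))              # ['Vibro-knife', 'Smoke Grenade']
print(query_affinity("Hero", "Villain")["Trust"])  # 5
```

Because the answers come from structured state rather than semantic search, there is nothing for the model to misremember.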

3.2 The Consequence Engine

As the LLM generates a chapter, it uses MCP to push state updates back to the server. If a character dies, the LLM executes update_state(Character="Mentor", Status="Dead"). The MCP server automatically recalculates the Plot Node tree, locking off the “Mentor Rescue” plotline and unlocking the “Vengeance” plotline. This guarantees absolute continuity regardless of how wildly the story diverges.
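The update-then-recalculate cycle described above can be sketched as follows. The node names follow the text's example; the locking rules themselves are hypothetical.

```python
# Sketch of the Consequence Engine: a state update triggers a
# recalculation of the plot-node tree. Node names follow the
# example in the text; the locking rules are illustrative.
state = {"characters": {"Mentor": {"status": "Alive"}}}
plot_nodes = {
    "Mentor Rescue": {"requires_alive": "Mentor", "locked": False},
    "Vengeance":     {"requires_dead":  "Mentor", "locked": True},
}

def recalculate_nodes():
    """Lock or unlock each node based on character status."""
    for node in plot_nodes.values():
        if "requires_alive" in node:
            node["locked"] = state["characters"][node["requires_alive"]]["status"] != "Alive"
        if "requires_dead" in node:
            node["locked"] = state["characters"][node["requires_dead"]]["status"] != "Dead"

def update_state(character, status):
    """The tool call the LLM executes; recalculation is automatic."""
    state["characters"][character]["status"] = status
    recalculate_nodes()

update_state("Mentor", "Dead")
print(plot_nodes["Mentor Rescue"]["locked"])  # True  (plotline locked off)
print(plot_nodes["Vengeance"]["locked"])      # False (plotline unlocked)
```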


4. User Ingestion and the Override Layer

Prior to generation, the reader interacts with a “Pre-Reading Lobby,” interfacing with the Seed-Book’s Variable Hooks. This allows for deep, structural alterations to the text before generation begins.

  • Genre Translation: The user may override the default genre. A “High Fantasy” seed can be shifted to “Cyberpunk Noir.” The MCP server applies a translation dictionary to world-states (e.g., Dragons become Rogue Gunships; Taverns become Neon Dive Bars).
  • Entity Overrides: The user can command radical casting changes. For example, injecting the prompt: “The antagonists are an army of hyper-intelligent golden retrievers.”
  • Stylistic Modeling: The user selects the authorial voice (e.g., Hemingway’s brevity, Lovecraftian dread, or a fast-paced cinematic register).

Because the overarching logic is handled by the MCP server, the LLM can seamlessly adapt the tone of these overrides. The golden retriever antagonists will still execute the logical maneuvers required by the Plot Nodes, adjusted dynamically for comedic or surreal-horror prose.
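The Genre Translation step above amounts to a lookup table applied to world entities at generation time. A minimal sketch, with mappings drawn from the text's own examples (the table structure itself is an assumption):

```python
# Sketch of the genre-translation dictionary applied when the
# reader overrides "High Fantasy" with "Cyberpunk Noir".
# The mappings follow the examples in the text; the table
# structure is illustrative.
TRANSLATIONS = {
    ("high_fantasy", "cyberpunk_noir"): {
        "Dragon": "Rogue Gunship",
        "Tavern": "Neon Dive Bar",
    },
}

def translate_entities(entities, source_genre, target_genre):
    """Map each world entity through the genre table; unmapped names pass through."""
    table = TRANSLATIONS.get((source_genre, target_genre), {})
    return [table.get(e, e) for e in entities]

print(translate_entities(["Dragon", "Tavern", "Healer"],
                         "high_fantasy", "cyberpunk_noir"))
# ['Rogue Gunship', 'Neon Dive Bar', 'Healer']
```

Letting unmapped entities pass through unchanged is what allows user-injected overrides (golden retriever antagonists included) to coexist with a partial translation table.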


5. Multi-Agent Generation Pipeline

To ensure high-quality prose and strict adherence to the Seed-Book’s logic, generation is handled by a Multi-Agent system communicating via the MCP server:

  1. The Showrunner (Logic Agent): Evaluates the current state of the MCP server, looks at the upcoming Plot Nodes, factors in user overrides, and generates a strict, bulleted scene outline.
  2. The Scribe (Creative Agent): Takes the Showrunner’s outline and the user’s Stylistic Model, and generates the actual prose of the chapter.
  3. The Auditor (Critique Agent): Cross-references the generated prose against the MCP server. If the Scribe writes that a character uses an item they do not possess, the Auditor flags the continuity error and forces a rewrite before the text is presented to the user.
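The three-agent pipeline can be sketched as a simple pass-the-baton loop. Each agent is a plain function here; in practice each would be an LLM call reading from and writing to the MCP server, and all function names are hypothetical.

```python
# Sketch of the Showrunner -> Scribe -> Auditor pipeline.
# Each agent is a stub function standing in for an LLM call.
def showrunner(state):
    """Logic agent: turn current server state into a strict scene outline."""
    return [f"Scene: {state['next_node']}",
            f"Present: {', '.join(state['cast'])}"]

def scribe(outline, style):
    """Creative agent: expand the outline into prose in the chosen style."""
    return f"[{style}] " + " ".join(outline)

def auditor(prose, state):
    """Critique agent: flag cast members the prose failed to account for."""
    return [c for c in state["cast"] if c not in prose]

state = {"next_node": "Betrayal", "cast": ["Hero", "Villain"]}
outline = showrunner(state)
prose = scribe(outline, "Hemingway")
errors = auditor(prose, state)
print(errors)  # [] -> no continuity flags, the chapter passes the audit
```

A non-empty `errors` list is the signal that would force the Scribe's rewrite before the text reaches the reader.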

6. Emergent Endings and Narrative Hallucination

Because the AI is governed by underlying character motivations rather than a rigid script, the framework supports Emergent Narrative.

If a user’s specific overrides cause the Hero and the Antagonist to develop a high Romance affinity—something the original author never planned—the MCP server’s logic detects that the predefined “Final Battle” node is no longer psychologically valid.

The Showrunner agent is then permitted to extrapolate, utilizing the world’s parameters to dynamically generate a new “Plot Node” (e.g., a truce, a joint betrayal of their respective factions). The book effectively writes an ending wholly unique to that specific reader’s parameters, while remaining logically cohesive.
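The validity check that retires the scripted ending and hands control to the Showrunner can be sketched like this. The thresholds and replacement endings are hypothetical; only the "Final Battle" node and the truce/joint-betrayal outcomes come from the text.

```python
# Sketch of the psychological-validity check on a predefined node.
# Thresholds are illustrative; node and outcome names follow the text.
def node_is_valid(node, affinity):
    """A 'Final Battle' is psychologically invalid between near-lovers."""
    if node["name"] == "Final Battle" and affinity.get("Romance", 0) > 60:
        return False
    return True

def resolve_ending(node, affinity):
    """Keep the scripted node, or extrapolate an emergent replacement."""
    if node_is_valid(node, affinity):
        return node["name"]
    return "Emergent: " + ("Joint Betrayal" if affinity.get("Trust", 0) > 50 else "Truce")

print(resolve_ending({"name": "Final Battle"}, {"Romance": 75, "Trust": 40}))
# Emergent: Truce
```

The emergent node is still derived from server state (the affinity scores), which is what keeps the improvised ending logically cohesive.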


7. The Snapshot Protocol (Minting the EPUB)

The “Liquid” nature of the text means that closing and reopening the application could lead to prose variations, rendering traditional bookmarking impossible. Furthermore, readers desire the ability to own, archive, and share their unique narrative permutations.

To resolve this, the framework includes a Snapshot Protocol. Upon the completion of the reading experience, the user can trigger an export command. The system compiles the generated text, strips away the MCP server infrastructure, applies standard typesetting, and mints a static .epub file.
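A minimal sketch of the minting step: compiling generated chapters into a bare-bones EPUB container, which is a zip archive whose first entry must be an uncompressed `mimetype` file. A real export would add the OPF manifest, spine, and typesetting; the paths here are illustrative.

```python
# Minimal sketch of the Snapshot Protocol's export step.
# Per the EPUB container spec, "mimetype" must be the first
# entry and stored uncompressed. Paths are illustrative.
import zipfile

def mint_epub(path, chapters):
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml",
                   '<?xml version="1.0"?>\n'
                   '<container version="1.0" '
                   'xmlns="urn:oasis:names:tc:opendocument:xmlns:container">\n'
                   '  <rootfiles><rootfile full-path="OEBPS/content.opf" '
                   'media-type="application/oebps-package+xml"/></rootfiles>\n'
                   '</container>')
        for i, text in enumerate(chapters, 1):
            z.writestr(f"OEBPS/chapter{i}.xhtml",
                       f"<html><body><p>{text}</p></body></html>")

mint_epub("snapshot.epub", ["It was a dark and stormy night.", "The end."])
```

Once minted, the file contains no trace of the MCP server; it is an ordinary static e-book.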

This static artifact can be shared. Consequently, secondary communities can form around Seed-Books, where readers share and compare radically different .epub outcomes generated from the identical foundational matrix.


8. Conclusion

The Liquid Literature framework, powered by the Model Context Protocol, represents a paradigm shift in digital storytelling. By decoupling narrative logic from prose, and replacing RAG with an active state-machine, we eliminate the continuity and hallucination issues that have plagued LLM-driven storytelling. This framework democratizes narrative creation, transforming the reading experience into a collaborative dialogue between the author’s architecture, the AI’s generation, and the reader’s imagination.
