Mastodon Politics, Power, and Science

Monday, May 4, 2026

Breaking the GM Degeneracy

J. Rogers, SE Ohio

A Deep-Space Oscillating Test Mass Experiment to Determine

the Gravitational Constant G to Nine Significant Figures

Abstract

The gravitational constant G remains the least precisely known fundamental constant in physics, determined to only approximately five significant figures after three centuries of effort. We identify the structural cause: G and the mass M of any gravitating body large enough to produce a measurable field are observationally inseparable. Every astronomical measurement yields only the product GM. Cavendish-style laboratory experiments have failed to converge across more than two centuries. We propose a conceptually simple resolution: construct a toroidal mass of precisely known composition in deep space, far from competing gravitational sources, bore a hole through its center, and drop a test mass through the hole. The test mass oscillates indefinitely under gravity alone, with a period set by the release height and the known surface mass density of the toroid. An LED-based interferometer simultaneously tracks oscillation period T, instantaneous acceleration a(t), and distance r(t) from the center hole continuously throughout each oscillation, providing three independent and overdetermined routes to G from a single data stream. Because the mass is known from construction rather than inferred from gravity, the GM degeneracy is broken for the first time. The test mass is repeatedly lifted, settled, and released, accumulating thousands of independent period measurements per run over a mission lifetime exceeding one year on station. Tidal contamination from the Sun at 3 AU is verified by calculation to be 1.1 × 10⁸ times smaller than the measurement signal, and galactic tidal forces are 6.7 × 10⁷ times smaller. The deep space environment is not merely quiet — the rest of the universe has gone effectively silent. All required technologies are flight-proven. The mission requires no new physics and no new engineering principles.

1. The Problem: GM is What Nature Exposes

Newton's law of gravitation is conventionally written as:

F = G M m / r²

and general relativity encodes gravitational geometry through terms of the form GM/c²r. In both frameworks, G and M appear as a product. This is not a mathematical convenience — it reflects a deep operational fact: no measurement that relies on gravity to establish the behavior of a gravitating body can separate G from M. The observable is always GM.

NASA's operational practice makes this explicit. Planetary ephemerides, spacecraft navigation, and GPS relativistic corrections all use the gravitational parameter mu = GM, known for Earth to approximately nine significant figures. The individual values of G and M are not used. They cannot be, because the mass of any astronomically significant body is inferred from its gravitational behavior, making any G determination from astronomical sources tautological. If we knew the mass of the Earth independently, we would know G to identical precision. We do not, and the circularity is exact and complete.

G is not philosophically uncertain. It has an exact value. Nature knows it precisely. The uncertainty is entirely on the measurement side, and the measurement side has a specific, identifiable structural problem that has gone unresolved for three centuries.

2. The Failure of Laboratory Methods

The Cavendish torsion balance, introduced in 1798, was designed to escape this circularity by using laboratory-scale masses whose weight could be determined independently through mechanical means. After more than two centuries of refinement across dozens of independent experiments at leading metrology institutes worldwide, the results do not converge.

Recent high-precision determinations of G disagree with each other by 40 to 50 parts per million, while individual experiments claim uncertainties of 10 to 20 parts per million. The discrepancy between results exceeds the claimed precision by a factor of two to five. CODATA periodically widens the accepted uncertainty interval to accommodate the spread.

The fundamental difficulty is the signal-to-noise environment. Gravity is the weakest force. Laboratory-scale masses produce gravitational forces at or below the level of seismic noise, thermal expansion, electrostatic coupling, and the gravitational influence of nearby structures. No experimental design has succeeded in isolating the gravitational signal cleanly from this noise floor at the required precision. More than two centuries of sustained effort is sufficient to conclude that this is not a solvable engineering problem in the terrestrial environment.

3. The Consequence: A Blurred Axis

The inability to determine G precisely propagates directly into every dimensionless ratio that crosses the gravitational-electromagnetic interface. The Planck mass is defined as:

m_P = sqrt(hbar * c / G)

and inherits the full uncertainty of G. The fine structure constant alpha is known to approximately ten significant figures. The ratio m_p/m_P — proton mass to Planck mass — is a fundamental dimensionless number that any unified theory must address. It is known to only five significant figures, not because of any difficulty on the electromagnetic side, but entirely because G limits the determination of m_P.

Numerical computation confirms the scaling precisely. Across the current uncertainty range of G — approximately 15 parts per million — the natural unit scale derived from hbar, c, and G shifts by 7.5 parts per million, following the theoretical scaling X proportional to G^(-1/2) exactly. The internal self-consistency within any fixed value of G is maintained to floating-point precision (~10^-16). The geometry is exact. The uncertainty is entirely in our measurement access to G.
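
As a sanity check on that scaling, the following minimal Python sketch (CODATA-style central values; the 15 ppm band is taken from the text) shifts G across its quoted uncertainty and confirms that the Planck mass moves by half as much, i.e. as G^(-1/2).

    import math

    hbar = 1.054571817e-34   # J*s
    c    = 2.99792458e8      # m/s (exact)
    G0   = 6.67430e-11       # m^3 kg^-1 s^-2, CODATA-style central value

    def planck_mass(G):
        return math.sqrt(hbar * c / G)

    for ppm in (-15, 0, 15):
        G = G0 * (1 + ppm * 1e-6)
        shift_ppm = (planck_mass(G) / planck_mass(G0) - 1) * 1e6
        print(f"G shifted {ppm:+d} ppm -> m_P shifts {shift_ppm:+.2f} ppm")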

G is a unit conversion factor — the bridge between our independently-defined kilogram and the natural geometry of gravity. Like c, it would be 1 by construction if our mass unit had been defined gravitationally. The constants G and c are not deep truths about nature. They are artifacts of defining length, time, and mass independently using different physical processes. The scandal is not that G is uncertain. The scandal is that we have treated a units alignment problem as a fundamental mystery for three centuries.

Any theory proposing an exact relationship between m_p/m_P and alpha cannot be tested better than five significant figures. The relationship may be exact, numerically sitting in plain sight, and we cannot resolve it. This is structurally identical to how epicyclic astronomy concealed elliptical orbits for over a millennium. Epicycles made accurate predictions. The system was internally consistent. And buried inside that predictive success was exact geometry the representation could not expose. GM is our epicycle. The exact dimensionless ratios connecting gravity to electromagnetism are hidden inside a perfectly predictive framework that cannot expose them.

4. The Proposed Experiment

4.1 Core Concept

The GM degeneracy is broken by one structural change: use a gravitating body whose mass is known from construction, not from its gravitational behavior. A body assembled from measured components in a zero-gravity environment has a mass determined by the sum of its parts, each weighed through the metrological chain anchored to the SI kilogram, independently of gravity. This mass M is known before the gravitational measurement begins.

A flat toroidal disk — a large washer — with a hole bored through its center axis is the chosen geometry. A test mass m is dropped through the hole. It oscillates back and forth through the disk under gravity alone. The period of this oscillation depends only on the release height and on the surface mass density sigma of the disk, which is known from construction. The measurement of G reduces to a measurement of oscillation period, acceleration, and distance — all tracked simultaneously by a single LED interferometer.

4.2 Physics of the Toroidal Oscillator

For a uniform infinite plane of surface mass density sigma, the gravitational acceleration is constant on both sides:

g = 2 * pi * G * sigma

independent of distance from the surface. A test mass released from rest at height h above the disk therefore falls with constant acceleration g, passes through the hole, decelerates symmetrically on the far side, stops at height h again, and falls back, tracing a triangular-wave oscillation with period:

T = 4 * sqrt(2 * h / g) = 4 * sqrt(h / (pi * G * sigma))

The critical feature is that the period depends only on the release height h and the surface mass density sigma. Sigma is known from construction, and h is read directly off the interferometer record on every cycle, so each cycle from first to last yields an independent determination of G, and the measurement does not degrade as energy slowly dissipates. The LED interferometer simultaneously tracks three independent observables throughout every oscillation:

— T: oscillation period, from precise timing of successive zero crossings at the disk plane

— a(t): instantaneous gravitational acceleration, from the second time derivative of interferometric position

— r(t): distance from the center of the hole at every instant, providing continuous geometry verification and confirming the mass distribution model

These three observables are overdetermined for a single unknown G. Their mutual consistency throughout each oscillation provides direct internal systematic error estimation with no additional apparatus. Any unmodeled perturbation shows up as inconsistency between the three routes to G before it corrupts the result.
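
To fix the scale of these observables, here is a minimal numerical sketch in Python. The toroid mass of 5000 kg is taken from Section 5.1; the radii and release heights are assumed, illustrative values only, since the text does not fix them.

    import math

    G       = 6.674e-11      # m^3 kg^-1 s^-2
    M       = 5000.0         # kg, toroid mass (Section 5.1)
    R_outer = 1.5            # m, assumed outer radius
    R_inner = 0.05           # m, assumed bore radius
    sigma   = M / (math.pi * (R_outer**2 - R_inner**2))   # surface mass density
    g       = 2 * math.pi * G * sigma                     # near-plane acceleration

    print(f"sigma = {sigma:.0f} kg/m^2,  g = {g:.2e} m/s^2")
    for h in (0.01, 1.0):    # assumed release heights, m
        T = 4 * math.sqrt(2 * h / g)                      # full oscillation period
        print(f"h = {h:4.2f} m -> T = {T:6.0f} s ({T / 60:5.1f} min)")

With these assumptions the near-plane acceleration is a few times 10⁻⁷ m/s² and a single oscillation takes tens of minutes to a few hours, which sets the cadence of the lift-settle-release cycle described in Section 4.5.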

4.3 Washer Construction: Batteries as Known Mass

The toroidal disk is constructed from cylindrical battery cells packed into a toroidal form and encased in a precision metal shell. The batteries serve a dual purpose: they are the power source for the mission and they constitute the primary known mass. No dead-weight ballast is carried. Every kilogram of structural mass is simultaneously delivering power.

The metal shell provides a uniform, precisely characterized outer geometry. Shell thickness is known. Battery cell geometry and individual mass are measured before assembly. Total mass M is the sum of precisely accounted components, all measured on the ground before launch. The surface mass density sigma = M / (pi * (R_outer^2 - R_inner^2)) is known to the precision of the pre-launch mass accounting.

Battery discharge does not meaningfully change the mass. The chemical energy released is accounted for by E = mc^2 at a level far below the measurement floor. Mass M is effectively constant throughout the mission lifetime, and any slow drift is trackable from power consumption telemetry.
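
A rough numbers check of that claim, assuming lithium-ion-class cells (the text does not specify a chemistry or capacity):

    c = 2.99792458e8            # m/s
    M = 5000.0                  # kg, toroid mass
    specific_energy = 200.0     # Wh/kg, assumed lithium-ion-class cells
    E = M * specific_energy * 3600.0    # total stored chemical energy, J
    delta_m = E / c**2                  # mass equivalent of the released energy
    print(f"stored energy                  : {E:.2e} J")
    print(f"mass change if fully discharged: {delta_m:.1e} kg "
          f"({delta_m / M:.1e} of M)")

Even complete discharge changes M by parts in 10^12, orders of magnitude below the 1 part per billion systematics target.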

4.4 LED Interferometer

The measurement instrument is an LED-based Michelson interferometer. A narrow-bandwidth LED with a bandpass filter provides sufficient coherence length for path differences involved in tracking oscillation amplitudes of order one meter. Power consumption is in the milliwatt range. No laser cooling, no frequency stabilization, and no moving optical components are required.

At 3 AU from the Sun the sky is dark. There is no solar background to filter against. Starlight provides negligible interferometric noise. The deep space environment is here an advantage: the LED signal completely dominates the detector with no competing background illumination.

The test mass carries a retroreflector. The interferometer arm length changes as the test mass oscillates, producing fringes that encode position continuously. Zero crossings at the disk plane are timed to atomic clock precision. The combination of T, a(t), and r(t) as continuous functions throughout each oscillation delivers G from three independent routes simultaneously from a single passive optical system.

4.5 Operational Procedure

The test mass is lifted to a specified height above the hole center and held. The LED interferometer confirms the test mass is at rest — zero velocity, stable position — before release. The actuator releases. The test mass falls through the hole, decelerates on the far side, returns, and oscillates. The interferometer tracks every cycle continuously.

The oscillation runs until amplitude has decayed to near the noise floor or lateral drift approaches the hole wall. The actuator catches the test mass, lifts it back to the starting height, and waits for the interferometer to confirm stillness. A new run begins. This cycle repeats throughout the on-station mission phase.

Each run provides thousands of independent period measurements. Each lift-settle-release cycle is independently characterized. Run-to-run consistency is a direct systematic error check. The experiment is not a one-shot measurement — it is the same experiment performed thousands of times with the same apparatus in a zero-noise environment, accumulating statistics continuously for over a year.

4.6 Mission Architecture

The washer payload is delivered to the target location by a conventional booster. On arrival the booster releases the washer and fires away to a minimum separation distance of 100 kilometers. At this separation the booster's gravitational influence on the test mass oscillation is negligible, and the booster acts as a radio relay — receiving low-power data from the washer and forwarding it to Earth at full deep space communication power. The booster requires no further maneuvering and recedes on a diverging trajectory.
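
The claim of negligible booster influence is easy to bound. The booster dry mass is not specified in the text; 2000 kg is an assumed value for this sketch.

    G         = 6.674e-11    # m^3 kg^-1 s^-2
    m_booster = 2000.0       # kg, assumed booster dry mass
    d         = 1.0e5        # m, minimum separation
    a_signal  = 3.34e-7      # m/s^2, measurement signal from Section 5.1

    a_booster = G * m_booster / d**2
    print(f"a_booster = {a_booster:.2e} m/s^2 "
          f"({a_signal / a_booster:.1e} x smaller than the signal)")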

Following separation, the washer is left to settle. Vibration from separation, thermal distortion as the system reaches radiative equilibrium, and any residual rotation are all monitored by the LED interferometer and allowed to damp naturally. This settling phase may take weeks to months. Formal measurement runs do not begin until the interferometer confirms the system is at rest to the required precision. The settling period characterizes the system in detail and verifies the mass distribution model against the measured gravitational field geometry before any G determination is attempted.

The target location is at or beyond 3 AU from the Sun. The solar tidal gradient across the experimental apparatus at this distance is verified by calculation (Section 5) to be more than 100 million times smaller than the measurement signal. The precise location is determined by a standard orbital mechanics trade between mission delta-V and acceptable tidal contamination. Minimum on-station measurement duration is one year.

4.7 Power Budget

The LED interferometer operates in the milliwatt range. The test mass actuator draws power only during brief lift and release operations. The onboard computer handles data logging and interferometer control at low clock rates. The radio transmitter to the booster relay at 100 kilometers requires trivial power at that range. Total payload power budget is estimated at 20 to 30 watts continuous. The toroidal battery mass, sized for the known mass requirement of the experiment, provides this power for the required mission lifetime with margin.

5. Tidal Contamination Analysis

The primary concern for any deep space gravitational experiment is contamination from external tidal forces. We calculate the tidal acceleration from all significant sources across the 1-meter scale of the apparatus and compare to the measurement signal.

5.1 Signal Strength

For a toroidal mass M = 5000 kg with a test mass at distance r = 1 meter from the center hole, the gravitational acceleration constituting the measurement signal is:

a_signal = G*M / r^2 = (6.674e-11)(5000) / (1)^2 = 3.34e-7 m/s^2

This is the reference against which all tidal contaminations are compared.

5.2 Solar Tidal Force

The tidal acceleration from the Sun across an apparatus of length delta_r is:

a_tidal = 2 * G * M_sun / R^3 * delta_r

where R is the heliocentric distance. At 1 AU this gives 7.93 × 10⁻¹⁴ m/s² across 1 meter — already 4.2 million times smaller than the signal. The tidal force scales as 1/R³, so at 3 AU it drops by a further factor of 27:

a_tidal,Sun (3 AU) = 2.94e-15 m/s^2

This is 113 million times smaller than the measurement signal. As a fraction of the signal it represents 8.8 parts per billion, within an order of magnitude of the 1 part per billion systematics target associated with nine significant figures in G (see Section 5.5).
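
The solar numbers above can be reproduced in a few lines; the constants are standard, and the toroid signal is the Section 5.1 value.

    G     = 6.674e-11        # m^3 kg^-1 s^-2
    M     = 5000.0           # kg, toroid mass
    r     = 1.0              # m, test mass distance from the hole center
    M_sun = 1.989e30         # kg
    AU    = 1.496e11         # m

    a_signal = G * M / r**2
    print(f"a_signal = {a_signal:.2e} m/s^2")
    for R_au in (1.0, 3.0):
        a_tidal = 2 * G * M_sun / (R_au * AU)**3 * 1.0    # across delta_r = 1 m
        print(f"Sun at {R_au:.0f} AU: a_tidal = {a_tidal:.2e} m/s^2, "
              f"{a_signal / a_tidal:.1e} x smaller than the signal")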

5.3 Galactic Tidal Force

The local galactic tidal acceleration is known from stellar dynamics and pulsar timing studies to be approximately 5 × 10⁻¹⁵ m/s² per meter of apparatus length. For the 1-meter scale of this experiment:

a_tidal,Galaxy = 5.00e-15 m/s^2

This is 67 million times smaller than the signal — comparable to the solar tidal and equally negligible.

5.4 Planetary Tidal Forces

Jupiter, the most massive planet, presents a tidal acceleration across 1 meter of approximately 1.18 × 10⁻¹⁸ m/s² at a conservative minimum separation of 4 AU. This is 280 billion times smaller than the signal and requires no further consideration.

5.5 Summary

Table 1 summarizes all tidal contamination sources. The worst-case external contamination — the galactic tidal force — is 67 million times smaller than the measurement signal. At 3 AU, the rest of the universe has gone effectively silent. This is not a marginal improvement over the terrestrial environment. It is a qualitative change in what measurement is possible.

Source                                    Tidal Acceleration (m/s²)   Ratio to Signal         Orders of Magnitude Below Signal
Toroid, 5000 kg at r = 1 m (signal)       3.34 × 10⁻⁷                 1 (reference)           0
Sun at 1 AU, across 1 m                   7.93 × 10⁻¹⁴                4.2 × 10⁶ × smaller     6.6
Sun at 3 AU, across 1 m                   2.94 × 10⁻¹⁵                1.1 × 10⁸ × smaller     8.1
Milky Way galaxy, across 1 m              5.00 × 10⁻¹⁵                6.7 × 10⁷ × smaller     7.8
Jupiter at 4 AU separation, across 1 m    1.18 × 10⁻¹⁸                2.8 × 10¹¹ × smaller    11.4

Table 1. Tidal contamination at 3 AU compared to measurement signal (M = 5000 kg, r = 1 m, delta_r = 1 m).

The solar tidal force at 3 AU, as a fraction of signal, is 8.8 parts per billion. For reference, nine significant figures of precision in G requires controlling unmodeled systematics to 1 part per billion, so the raw solar tide at 3 AU exceeds that threshold by roughly a factor of nine. For experiments targeting fewer than nine figures of precision, 3 AU provides ample margin. For the full nine-figure target, the solar tide is a smooth and precisely calculable perturbation that can be modeled and subtracted, and an orbit at 4 AU reduces it by a further factor of 2.4; together these bring the residual contamination below the threshold.

6. Statistical Power of Repeated Oscillation

The fundamental advantage of this experiment over every previous G determination is statistical accumulation. A single period measurement T has some uncertainty epsilon from timing precision and environmental noise. After N independent cycles, the uncertainty on the mean period is epsilon / sqrt(N).
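
The sqrt(N) behavior is the ordinary standard error of the mean; the toy simulation below uses assumed, illustrative values for the true period and single-cycle timing noise and simply checks that the scatter of the averaged period shrinks as expected.

    import random, statistics

    random.seed(1)
    T_true  = 1000.0    # s, assumed true period
    epsilon = 0.01      # s, assumed single-cycle timing uncertainty

    for N in (100, 10_000):
        means = [statistics.fmean([random.gauss(T_true, epsilon) for _ in range(N)])
                 for _ in range(200)]
        print(f"N = {N:6d}: scatter of the mean = {statistics.stdev(means):.1e} s "
              f"(expected ~ {epsilon / N**0.5:.1e} s)")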

In a one-year on-station mission the number of measurable cycles depends on the release height: with the near-plane acceleration of Section 4.2, centimeter-scale release heights give periods of order tens of minutes and tens of thousands of cycles per year, while meter-scale releases give periods of order hours and thousands of cycles. Each cycle also contributes a continuous interferometric record of a(t) and r(t), so the number of statistically independent samples entering the fit is far larger than the cycle count alone. The statistical reduction factor sqrt(N) accumulated over the mission is of order several hundred from cycle counting alone, and larger still when the continuous records are folded in, beating down random errors that would dominate any single measurement.

No ground-based experiment has ever had this. Cavendish apparatus yields one measurement per configuration. Resets are slow and noisy. N never gets large. The noise floor never drops because statistics never accumulate. Here N is limited only by mission lifetime. The experiment improves continuously as long as the apparatus operates.

The three simultaneous observables — T, a(t), and r(t) — provide independent routes to G from the same data stream. Their mutual consistency serves as a continuous systematic error monitor throughout the mission. Any unmodeled perturbation that would corrupt one observable will show up as inconsistency among all three before it biases the G determination.

7. Scientific Return

A determination of G to nine significant figures immediately propagates precision improvement through every dimensionless ratio in physics that involves gravity. The Planck mass m_P = sqrt(hbar*c/G) becomes known to nine figures. The ratio m_p/m_P — proton mass to Planck mass — sharpens from five to nine significant figures with no additional measurement on the electromagnetic side.

The gap between five and nine significant figures is where proposed exact relationships between m_p/m_P and alpha either are confirmed or are falsified. A correct unified theory would predict this ratio exactly as a function of the fine structure constant and other dimensionless electromagnetic parameters. Such predictions are currently untestable beyond five figures. This experiment makes them testable to nine.

Every GM product for solar system bodies simultaneously becomes a precise mass determination. The mass of the Earth, the Moon, Mars, and Jupiter — all known to nine figures immediately by dividing their known GM by the newly precise G. This is a complete remeasurement of solar system masses at no additional observational cost.

8. Cost and Comparison

The cumulative cost of Cavendish-style G determinations over the past century, across major metrology institutes in multiple countries, has been substantial. No convergence has been achieved. Three hundred years of investment has produced not precision improvement but a widening recognition that the terrestrial environment is fundamentally the wrong place to do this experiment.

The proposed mission is less technically complex than many current planetary science missions. It requires no landing, no sample return, no complex in-situ chemistry, no precise pointing at a distant astronomical target. It requires transporting a known mass to deep space, releasing a booster, and operating an LED interferometer and test mass actuator for one or more years. The payload has no moving parts except the test mass actuator. The primary instrument draws milliwatts.

The mission fits within the cost envelope of an ESA Medium-class or NASA Discovery-class science mission. The scientific return — resolving a 300-year measurement failure and opening the gravitational-electromagnetic interface to genuine precision tests — is disproportionate to the engineering investment. A formal feasibility study is the appropriate immediate next step.

9. Conclusion

G has an exact value. The universe does not have error bars. The uncertainty in G is entirely on the measurement side and has a specific identifiable cause: we have never had independent access to the mass of a body large enough to produce a measurable gravitational field. Every previous approach either uses astronomical bodies whose masses are inferred from gravity, or uses laboratory masses too small to overcome the terrestrial noise floor.

The proposed experiment resolves this by construction. A toroidal battery mass of precisely known composition is placed at 3 AU from the Sun. A test mass oscillates through its central hole under gravity alone. An LED interferometer tracks period, acceleration, and distance simultaneously through thousands of oscillations over a mission lifetime exceeding one year. The mass is known before the gravitational measurement begins. The GM degeneracy is broken.

Tidal contamination from all external sources — Sun, galaxy, planets — is verified to be between 67 million and 280 billion times smaller than the measurement signal. The deep space environment does not merely reduce noise. It eliminates it.

The result is G to nine significant figures, the Planck Jacobians to nine figures, and every dimensionless ratio at the gravitational-electromagnetic interface sharpened by four to five significant figures. Proposed exact relationships between m_p/m_P and alpha become directly testable for the first time. No new physics is required. No new engineering principles are required. The only reason this has not been done is that it falls between the institutional mandates of metrology and deep space science. That gap should be closed.

References

[1] CODATA 2018 recommended values of the fundamental physical constants. Rev. Mod. Phys. 93, 025010 (2021).

[2] Cavendish, H. Experiments to determine the density of the Earth. Phil. Trans. R. Soc. London 88, 469-526 (1798).

[3] Gillies, G.T. The Newtonian gravitational constant: recent measurements and related studies. Rep. Prog. Phys. 60, 151 (1997).

[4] Rothleitner, C. & Schlamminger, S. Measurements of the Newtonian constant of gravitation. Rev. Sci. Instrum. 88, 111101 (2017).

[5] Quinn, T. et al. Improved determination of G using two methods. Phys. Rev. Lett. 111, 101102 (2013).

[6] Rosi, G. et al. Precision measurement of the Newtonian gravitational constant using cold atoms. Nature 510, 518-521 (2014).

[7] Armano, M. et al. Sub-Femto-g Free Fall for Space-Based Gravitational Wave Observatories: LISA Pathfinder Results. Phys. Rev. Lett. 116, 231101 (2016).

[8] Folkner, W.M. et al. The planetary and lunar ephemeris DE 430 and DE 431. Interplanet. Netw. Prog. Rep. 196, 1-81 (2014).

[9] Mohr, P.J., Newell, D.B. & Taylor, B.N. CODATA recommended values of the fundamental physical constants: 2014. Rev. Mod. Phys. 88, 035009 (2016).

[10] Duff, M.J. How fundamental are fundamental constants? Contemp. Phys. 56, 35-47 (2015).

[11] Iorio, L. Galactic tidal effects on the Oort Cloud and the outer solar system. MNRAS 443, 2523-2534 (2014).

Sunday, May 3, 2026

Which Second Does G Introduce?

J. Rogers, SE Ohio

On the Self-Referential Temporal Ambiguity of the Gravitational Constant

A Foundational Critique of Dimensional Analysis in Gravitational Physics

Abstract

Newton's gravitational constant G carries units of m³ kg⁻¹ s⁻². The s⁻² term introduces a specific time scale into the law of gravitation. However, general relativity establishes that gravity is not a force acting across a fixed time — it is a gradient of time rates. Every point in a gravitational field has its own proper time, running at a rate that depends on the local gravitational potential. This paper poses a question that has not been formally addressed in the literature: which second does G introduce? We demonstrate that this question has no well-defined answer, that G's temporal dimension is therefore physically ambiguous, and that this ambiguity is the root cause of G's notorious measurement inconsistency across experiments spanning 200 years. We further show that G/c² — which appears in the dimensionless gravitational parameter τ = (G/c²)(m/r) — is free of this ambiguity, is known to GPS precision (10 significant figures), and is the only combination of G that the universe actually uses.

1. The Unit Contamination Problem

Newton's law of gravitation is standardly written as:

F = G · mM / r²

where F is force in kg·m·s⁻², m and M are masses in kg, r is distance in meters, and G = 6.674 × 10⁻¹¹ m³ kg⁻¹ s⁻². The dimensional structure reveals an immediate problem: the quantities mM/r² have units of kg²/m². They contain no time. Gravity, as a geometric relationship between masses and distances, introduces no clock.

G injects s⁻² into this equation for one reason only: Newton's second law F = ma defines force to include acceleration, which is measured against a clock. The second was already in F via kinematics. G absorbs s⁻² as a compensating factor to preserve dimensional consistency across a unit system that was never designed for gravitational physics.

The alternative is immediate. Define force geometrically:

F ≡ mM / r² [units: kg²/m²]

Then gravity is exact, unit-free in the physical sense, and contains no clock. The constant migrates:

F = ma / G ⇒ G = ma / F

G becomes the constant of inertia — the conversion factor between geometric force and kinematic response. The temporal ambiguity now lives in kinematics, where it belongs, not in the description of gravitational geometry.

2. Gravity Is a Time Gradient

General relativity does not describe gravity as a force. It describes gravity as spacetime curvature, and in the weak-field limit, this curvature is predominantly temporal. The gravitational redshift formula is:

Δf/f = ΔΦ / c² = (G/c²) · (M/r)

A clock deeper in a gravitational well runs slower. The rate difference between two clocks at different gravitational potentials is continuous, position-dependent, and exact. This is not a perturbative correction to flat-space physics — it is the physics. Gravity IS the gradient of proper time rates across space.

GPS confirms this operationally. Satellite clocks must be corrected for gravitational time dilation to maintain nanosecond synchronization. These corrections are computed using τ = (G/c²)(M/r) and are accurate to 10 significant figures. GPS does not fail at the 5th significant figure despite G being known only to 5 significant figures.
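
The point is easy to make concrete: the standard GPS gravitational clock correction is computed entirely from the product GM (the navigation parameter mu), with G never appearing on its own. The sketch below uses rounded, illustrative values for the orbital and Earth radii and shows only the gravitational part of the correction.

    c     = 2.99792458e8     # m/s
    mu_E  = 3.986004418e14   # m^3/s^2, Earth's GM as used in navigation
    R_E   = 6.371e6          # m, mean Earth radius (approximate)
    r_gps = 2.6560e7         # m, GPS orbital radius (approximate)

    rate_diff = (mu_E / R_E - mu_E / r_gps) / c**2   # fractional clock-rate offset
    print(f"fractional rate difference : {rate_diff:.3e}")
    print(f"accumulated per day        : {rate_diff * 86400 * 1e6:.1f} microseconds")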

This is the central empirical fact that demands explanation.

3. The Self-Referential Temporal Ambiguity

We now state the core problem precisely.

G has units m³ kg⁻¹ s⁻². The s⁻² encodes a specific time rate — a second. Every Cavendish-style measurement of G uses a clock to measure acceleration, force, or oscillation period. That clock runs at a rate determined by the local gravitational potential.

But G is supposed to describe the gravitational potential itself.

The second embedded in G is therefore evaluated inside the very field that G is meant to characterize. This is not a small systematic error. It is a logical circularity:

• To measure G you need a clock.

• Your clock rate depends on the local gravitational potential Φ.

• Φ depends on G.

• Therefore G measured anywhere depends on G at that location.

G is not a universal constant. It is a local quantity, contaminated by the gravitational potential of the measurement site, that has been treated as universal because the contamination is small enough at Earth's surface to hide within experimental uncertainty — until experiments became precise enough to disagree.

The persistent disagreement between precision G measurements — experiments disagreeing by many times their stated error bars — is not experimental incompetence. It is the universe signaling that the quantity being measured is not well-defined.

4. Which Second? The Gradient Problem

Consider a gravitational field with potential Φ(r). The proper time rate at position r relative to a clock at infinity is:

dτ/dt = √(1 + 2Φ(r)/c²) ≈ 1 + Φ(r)/c² [weak field]

In a gradient, every point r has a distinct proper time rate. There is no canonical 'the second' in a gravitational field. The second at r₁ and the second at r₂ differ by:

Δ(dτ/dt) = Φ(r₁)/c² - Φ(r₂)/c² = (G/c²)(M/r₂ - M/r₁) [with Φ(r) = -GM/r]

When a Cavendish experiment uses a torsion fiber with period T to extract G, T is measured in the proper seconds of a clock sitting at the lab's gravitational potential. When an atom interferometry experiment uses laser pulse timing to measure acceleration, those pulse intervals are proper time intervals at the apparatus's location. The two experiments embed different seconds into their extracted values of G, and neither second has been corrected to a common reference.

The question 'which second does G introduce?' therefore has the answer: whichever second existed at the location and gravitational potential of the measurement, uncorrected for the field being measured. This is not a universal second. It is a local, potential-dependent, self-referentially contaminated second.

5. G/c² Is the Clean Quantity

The combination G/c² is free of this ambiguity. To see why, note that c is also measured locally using local clocks. The local second that contaminates G also contaminates c² in the same measurement context. When you form G/c², the local temporal factor cancels:

G/c² = [m³ kg⁻¹ s⁻²] / [m² s⁻²] = m / kg

The seconds are gone. G/c² has units of meters per kilogram — a purely geometric ratio. It sets the Schwarzschild radius per unit mass (r_s = 2GM/c²) and is the conversion factor between mass and the spatial curvature it produces.

The dimensionless gravitational parameter is then:

τ = (G/c²) · (m / r) = (l_P / m_P) · (m_SI / r_SI)

where l_P = √(hG/c³) and m_P = √(hc/G) are the Planck length and mass. Crucially, l_P/m_P = G/c² exactly. The Planck quantities are not fundamental here — they are a convenient factorization that makes explicit what is happening: the SI unit standards for length and mass (l_P, m_P) are introduced and then immediately cancelled by the actual physical ratio m_SI/r_SI. What remains is pure dimensionless physics.

τ is the same number regardless of where in a gravitational gradient you compute it, because G/c² carries no net temporal dependence. This is why GPS works to 10 significant figures using τ while G itself is uncertain at the 5th figure.
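
The cancellation is easy to verify numerically. The sketch below uses non-reduced Planck units (h rather than hbar, as in the text) and then evaluates tau at Earth's surface with approximate values, purely for illustration.

    import math

    h = 6.62607015e-34      # J*s (exact)
    c = 2.99792458e8        # m/s (exact)
    G = 6.67430e-11         # m^3 kg^-1 s^-2

    l_P = math.sqrt(h * G / c**3)     # non-reduced Planck length
    m_P = math.sqrt(h * c / G)        # non-reduced Planck mass
    print(f"l_P / m_P = {l_P / m_P:.6e} m/kg")
    print(f"G / c^2   = {G / c**2:.6e} m/kg")

    M_earth, R_earth = 5.972e24, 6.371e6   # kg, m (approximate)
    tau = (G / c**2) * (M_earth / R_earth)
    print(f"tau(Earth surface) = {tau:.3e}  (dimensionless)")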

6. The Measurement Implication

If G/c² is the physically clean quantity, the experimental program should measure G/c² directly rather than G alone. Several consequences follow:

• Atom interferometry experiments that measure gravitational acceleration a = GM/r² and laser interferometry experiments that measure r with electromagnetic precision are already measuring G/c² implicitly. The c² enters through the electromagnetic calibration of the length standard.

• Experiments that attempt to measure G in isolation — by measuring a gravitational force against a mass standard defined by the kilogram — are attempting to separate G from c² in a context where the universe has no opinion about that separation.

• The disagreement between G measurements performed by different methods may reflect genuine physical differences in the local gravitational potential of each laboratory, uncorrected for the temporal self-reference described in Section 3.

A 2026 NIST proposal to measure G via laser spectroscopy of the axion Compton frequency — connecting G to h, e, and nucleon masses — is the correct structural approach. It measures G through electromagnetic invariants, which share the same local temporal frame as c, and therefore directly accesses G/c² in the physically meaningful sense.

7. The Hume Boundary in Physics

The deeper issue is epistemological. A measurement is not a property that an object possesses. It is a ratio between an object and an arbitrary unit standard. The table does not have a length; it has a ratio to the meter. The meter is a convention, adopted in Paris in 1793, with no physical necessity.

Newton's equation F = GmM/r² embeds three independent arbitrary conventions — the meter, the kilogram, and the second — inside G. These conventions were chosen for unrelated practical reasons by humans at a specific historical moment. There is no physical reason they should combine into a clean gravitational constant. They do not.

This is Hume's is-ought distinction applied to metrology. From descriptive physical facts you cannot derive normative unit definitions. No chain of measurements proves that a mile has 5280 feet. That is a social fact, true inside the convention, meaningless outside it.

G is partly a social fact. The 6.674 × 10⁻¹¹ carries the fingerprints of 18th century French surveying decisions. τ does not. τ is the universe's own dimensionless statement about relativistic compactness, independent of every convention ever adopted.

8. Conclusions

We have identified a fundamental ambiguity in the gravitational constant G: its s⁻² dimensional factor encodes a specific time rate, but gravity is a gradient of time rates with no single canonical value. The second embedded in every measurement of G is local, potential-dependent, and self-referentially contaminated by the field G is meant to describe.

This ambiguity predicts exactly what is observed: G measurements disagree across experiments by far more than their stated uncertainties, and no experimental improvement has resolved the disagreement in 200 years. The quantity being measured is not well-defined.

G/c² is well-defined. Its temporal factors cancel exactly, leaving a purely geometric m/kg ratio. GPS navigation confirms G/c² to 10 significant figures without requiring G to 10 significant figures. The universe computes τ = (G/c²)(m/r). It does not compute G and c² separately.

The experimental program for precision gravitational metrology should target G/c² directly through electromagnetic measurements, where the cancellation of temporal contamination is structurally guaranteed. Continuing to measure G in isolation is continuing to ask the universe a question it has no answer to.

The second in G was always the wrong second. There is no right one.

Note on Priority

The central argument of this paper — that G’s temporal dimension is self-referentially ambiguous because gravity is a time gradient — was developed in conversation and has not, to the authors’ knowledge, been stated in this form in the existing literature. The GPS precision argument for G/c² as the physically clean quantity is an empirical observation available in any precision navigation reference but whose metrological implication for G measurement has not been made explicit.

Friday, May 1, 2026

The complete operational definition of physical law in three lines.

The complete operational definition of physical law in three lines:

  1. Remove input units — cancel the arbitrary human unit standards from the measured quantities (e.g., divide by the non-reduced Planck Jacobians).

  2. Do the physics as a pure ratio — work with dimensionless ratios only. This is the only step that involves the eternal, unit‑free relationships.

  3. Decorate with output units — multiply by the appropriate Planck Jacobians (or any chosen unit standards) to express the result in human‑readable units.

Step 2 is the only physics. Steps 1 and 3 are pure accounting — converting between the dimensionless reality and our arbitrary measurement conventions.
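
A minimal sketch of the three-step recipe, using the Earth-Moon pair as an illustrative input (the masses and distance are rounded values, and the non-reduced Planck Jacobians are built from h, c, and G):

    import math

    h, c, G = 6.62607015e-34, 2.99792458e8, 6.67430e-11
    m_P = math.sqrt(h * c / G)        # Planck mass
    l_P = math.sqrt(h * G / c**3)     # Planck length
    F_P = c**4 / G                    # Planck force

    m1, m2, r = 5.972e24, 7.346e22, 3.844e8   # kg, kg, m (Earth, Moon, distance)

    # Step 1: remove input units (divide by the Planck Jacobians)
    x1, x2, xr = m1 / m_P, m2 / m_P, r / l_P
    # Step 2: do the physics as a pure dimensionless ratio
    X = x1 * x2 / xr**2
    # Step 3: decorate with output units (multiply by the Planck force)
    F = F_P * X

    print(f"F via Planck route : {F:.5e} N")
    print(f"F via G m1 m2 / r^2: {G * m1 * m2 / r**2:.5e} N")

The two printed forces agree, because F_P · (l_P/m_P)² algebraically equals G: the Jacobians in steps 1 and 3 carry all of the unit content, and step 2 carries all of the physics.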

Redefining Force Units to Expose G as a Metrological Artifact

J. Rogers, SE Ohio 

and a Proposal for High-Precision Measurement of 1/G as a Pure Inertial Constant

Abstract

The gravitational constant G does not describe a physical property of the universe. It is a conversion factor — a metrological patch — that exists because humans defined the unit of force (the Newton) in a way that is misaligned with the natural geometric structure of mass-distance interactions. This paper demonstrates that by redefining force to carry units of kg²/m², G vanishes from the law of gravitation entirely and reappears as k = 1/G inside Newton's Second Law, F = kma. In this reframing, k is a pure inertial constant with no connection to gravity as a phenomenon. We then propose a high-precision experimental design to measure k directly through a clean inertial acceleration experiment using laser interferometry — bypassing the torsion balance entirely and achieving precision that Cavendish-style gravity experiments cannot reach.

1. The Problem with the Newton

The SI unit of force — the Newton — is defined as:

1 N = 1 kg · m/s²

This definition was chosen for convenience. It makes Newton's Second Law trivially true:

F = ma

Because force is defined as mass times acceleration, F = ma contains no physical information whatsoever. It is a tautology. It is a statement about unit definitions, not about nature.

This convenience, however, creates a serious problem. When Newton wrote down the law of universal gravitation:

F = G · (m₁ m₂) / r²

the constant G had to be inserted to make the units balance. The left side has units of kg·m/s². The right side, without G, has units of kg²/m². G carries units of m³/(kg·s²) precisely to bridge this mismatch.

G is not telling us something about gravity. G is telling us that our unit system is incoherent relative to the natural geometry of mass interactions.

2. Redefining Force to Kill G in Gravity Law

Define a new unit of force such that force carries units of kg²/m². That is, force is defined as the natural product of mass-mass interaction over distance squared:

F_new ≡ m₁ m₂ / r² [units: kg²/m²]

Under this definition, the law of gravitation becomes exactly:

F_new = m₁ m₂ / r²

G has disappeared. Not because we set G = 1 by fiat, but because the force unit is now defined in the same geometric terms as the right-hand side. There is no mismatch to correct.

This is not a new physics claim. No experiment is affected. All predictions remain identical. We have changed nothing about the universe. We have only chosen a unit origin that is coherent with the natural geometry of mass interactions.

3. Where G Goes: The Inertial Constant k

Because we have redefined force, Newton's Second Law can no longer be a tautology. F_new and ma do not have the same units:

• F_new has units of kg²/m²

• ma has units of kg·m/s²

To write a second law connecting force to motion, we must introduce a constant k that carries the unit mismatch:

F_new = k · m · a

Dimensional analysis forces the value of k. Since F_new = k · F_SI, and F_SI = ma, we get:

k = 1/G ≈ 1.498 × 10¹⁰ kg s² m⁻³

k is not a new constant. It is G, relocated. Previously G sat inside the gravity equation as a signal of metrological incoherence. Now k sits inside the inertial equation for the same reason. The physics is identical. What has changed is where the constant lives — and that change has consequences for measurement.

4. Why This Matters for Measurement

G is the least precisely measured fundamental constant in physics. After centuries of effort, it is known only to approximately 5 significant figures — far worse than constants like c (exact by definition), h (exact by definition), or e (10 significant figures).

The reason is straightforward: the Cavendish torsion balance is trying to detect an extraordinarily weak gravitational signal against a background of seismic noise, thermal drift, and mechanical vibration. The signal-to-noise ratio is brutal. Every attempt to improve precision runs into the same physical limitations of the torsion balance geometry.

In the reframed system, k is a pure inertial constant. It has nothing to do with gravity as a phenomenon. Measuring k requires:

  • A known force in the new unit system (kg²/m²)

  • A known mass

  • A precise measurement of the resulting acceleration

Acceleration measurement by laser interferometry can resolve displacements at the picometer scale. This is not the limiting factor. The question is whether we can realize a force in kg²/m² units with sufficient precision to make the experiment meaningful — and the answer is yes, as described in the following section.

5. Experimental Proposal: The Inertial Calibration Experiment

The goal is to measure k = 1/G by applying a known force in kg²/m² units to a known mass and measuring the resulting acceleration with laser interferometry. The experiment is entirely non-gravitational in character — gravity is used once, at the beginning, to define the unit of force, and then plays no further role.

5.1 Step One: Define and Realize the Unit Force

The unit force in the new system is fixed exactly by the kg² and m² in its definition: two reference masses M at a measured separation r define the force as exactly M²/r². There is no uncertainty from any gravity measurement, just as F = ma carried no measurement uncertainty in the old system.

F_unit = M² / r² [= 1 unit of force by definition]

This is not a measurement. It is a definition. The numerical value of F_unit in new units is exactly M²/r² by construction. The precision of this step is limited only by the precision of M and r — both of which are controlled by national metrology standards to better than 1 part in 10⁸.

The physical realization of this force does involve the gravitational attraction between the spheres. But we are not measuring that attraction. We are defining it to be our unit. The Cavendish apparatus fails because it tries to measure an extremely weak signal. We avoid that entirely by simply declaring the signal to be 1.
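
For concreteness, a small sketch of what the definition fixes, with assumed sphere mass and separation (the text does not specify them); the SI-equivalent value is shown only for comparison and plays no role in the new unit system.

    G = 6.674e-11       # m^3 kg^-1 s^-2
    M = 10.0            # kg, assumed reference sphere mass
    r = 0.20            # m, assumed center-to-center separation

    F_new = M**2 / r**2          # kg^2/m^2, exact by definition
    F_SI  = G * F_new            # newtons, the same force expressed in SI
    print(f"F_new = {F_new:.1f} kg^2/m^2 (exact by construction)")
    print(f"F_SI  = {F_SI:.3e} N")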

5.2 Step Two: Transfer the Force to the Inertial Track

The gravitational attraction between the tungsten spheres is used to calibrate a force transducer — a precision electrostatic actuator or a cryogenic force balance — that can then apply the same magnitude of force to a test mass in a completely separate, clean environment.

This separation is critical. The inertial measurement environment is:

  • Seismically isolated (optical table on active dampers)

  • Temperature-controlled to millikelvin stability

  • In high vacuum (< 10⁻⁸ mbar) to eliminate air damping

  • Shielded from electromagnetic interference

The test mass m is suspended on a frictionless linear guide (magnetic levitation or superconducting bearing) so that it is free to accelerate along one axis without mechanical contact.

5.3 Step Three: Measure Acceleration by Laser Interferometry

Apply the calibrated 1-unit force F_unit to the test mass m. The test mass begins to accelerate. Track the displacement x(t) of the test mass over time using a heterodyne laser interferometer locked to an iodine-stabilized reference laser.

The interferometer resolves displacement to better than 1 pm over measurement intervals of seconds. From the displacement-time record, acceleration a is extracted by fitting to the kinematic relation:

x(t) = ½ a t²

The fit is performed over thousands of independent measurement runs. Statistical averaging suppresses random noise by √N, where N is the number of runs. Systematic errors — laser frequency drift, test mass charging, residual gas pressure — are characterized and subtracted.
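
A toy version of this fit, with assumed values for the applied acceleration and readout noise, shows the single-parameter least-squares extraction; it is illustrative only and not a model of the real error budget.

    import random

    random.seed(0)
    a_true = 1.0e-7                  # m/s^2, assumed applied acceleration
    noise  = 1.0e-12                 # m, assumed interferometer noise per sample
    times  = [i * 0.01 for i in range(1, 1001)]          # 10 s record at 100 Hz
    x_meas = [0.5 * a_true * t**2 + random.gauss(0.0, noise) for t in times]

    # Least-squares fit of x = 0.5 * a * t^2 (single free parameter a):
    # a_hat = sum(x_i * 0.5*t_i^2) / sum((0.5*t_i^2)^2)
    num = sum(x * 0.5 * t**2 for x, t in zip(x_meas, times))
    den = sum((0.5 * t**2) ** 2 for t in times)
    a_hat = num / den
    print(f"recovered a = {a_hat:.6e} m/s^2 (true {a_true:.1e})")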

5.4 Step Four: Extract k

From the known force F_unit, the known mass m, and the measured acceleration a, k is directly:

k = F_unit / (m · a)

Since F_unit = 1 by definition in the new unit system:

k = 1 / (m · a)

With m known to 1 part in 10⁸ and a measured to a precision limited by the interferometer and averaging time, the achievable precision for k, and therefore for G, is expected to significantly exceed the current 5-significant-figure limit on G.

6. Error Budget

The dominant error sources and their estimated contributions are as follows:

  • Mass standard uncertainty: < 1 part in 10⁸ (traceable to BIPM kilogram definition via Kibble balance)

  • Length standard uncertainty: < 1 part in 10⁹ (traceable to iodine-stabilized laser)

  • Force transducer calibration: estimated 1-5 parts in 10⁷ (dominant term, improvable with cryogenic force balance)

  • Interferometer displacement noise: < 1 pm/√Hz (suppressed by averaging)

  • Residual gas damping: < 1 part in 10⁸ at 10⁻⁸ mbar

  • Seismic noise: suppressed by active isolation to < 1 nm RMS at 1 Hz, negligible over measurement timescale

The experiment is not limited by quantum noise, thermal noise, or any fundamental physical barrier at the target precision. It is limited by engineering — specifically by the precision of the force transducer. This is an improvable engineering problem, not a fundamental one.

7. What This Experiment Is — and Is Not

This experiment is not a gravity experiment. It does not measure the strength of gravitational attraction. Gravity appears only in the single act of defining the unit force by the geometry of two masses — and at that step, we are not measuring anything. We are defining.

Everything after that is pure inertia. We are measuring how much a known mass resists a known force. That is a metrological question about the relationship between our mass scale, our length scale, and our time scale. The answer is k = 1/G, but G as a concept plays no role in the measurement itself.

This is why the precision is achievable. The torsion balance fails because it is trying to detect gravity through noise. This experiment detects inertia — which is not weak, not noisy, and not buried under competing signals. It is the most fundamental mechanical property of matter, and we have instruments precise enough to measure it.

8. Conclusion

G is a metrological artifact. It exists in the gravitational law because humans defined force in units (kg·m/s²) that do not match the natural geometric structure of mass interactions (kg²/m²). Redefining force to carry units of kg²/m² eliminates G from the gravity law entirely and moves it — as k = 1/G — into the inertial law F = kma.

In this location, k is measurable by a clean acceleration experiment using laser interferometry. The experiment has no gravitational signal to detect, no torsion fiber to stabilize, and no seismic noise problem. It is limited only by the precision of force transducer engineering, which is an improvable problem.

The result would be the most precise measurement of G ever achieved — not by doing a better gravity experiment, but by recognizing that G was never a gravity constant to begin with.

k = 1/G ≈ 1.498 × 10¹⁰ kg s² m⁻³

Thursday, April 30, 2026

You Don’t Own the Code That AI Writes for You: A Problem of Ownership in the Age of Generative Coding

J. Rogers, SE Ohio

Abstract

The rapid adoption of generative AI coding tools has created a quiet legal crisis. Companies are replacing human developers with AI, building massive codebases from machine‑generated output, and assuming that they own the resulting intellectual property. Under current US copyright law, this assumption is largely false. This paper explains why purely AI‑generated code cannot be copyrighted, why the distinction between “AI‑generated” and “AI‑assisted” matters, and how the rush to replace human programmers is producing millions of lines of legally orphaned code. The paper concludes with practical risks and recommendations for organizations relying on AI coding tools.

1. Introduction

In 2026, a software engineer in Stockholm told the New York Times: “I probably spend more than my salary on Claude.” Across the industry, AI coding assistants are being framed as efficiency miracles. Yet a fundamental legal question has been largely ignored: Who owns the code that an AI writes for you?

The answer from the US Copyright Office, federal courts, and the Supreme Court (by its refusal to hear the case to the contrary) is clear: No human author, no copyright. If a company’s codebase contains large volumes of purely AI-generated code, that code is effectively public domain. Competitors can copy it. Security flaws can be copied without remedy. And the company cannot sue for infringement.

This paper outlines the legal landscape, the critical distinction between “generated” and “assisted” works, and the real‑world consequences for businesses that are currently firing junior developers and replacing them with AI.

2. The Legal Framework: No Human Author, No Copyright

US copyright law protects “original works of authorship fixed in any tangible medium of expression.” The Supreme Court has repeatedly held that an “author” must be a human being. In Burrow‑Giles Lithographic Co. v. Sarony (1884), the Court defined author as “he to whom anything owes its origin.”

In 2023, the US Copyright Office issued policy guidance stating that it “will register an original work of authorship, provided that the work was created by a human being.” Works generated by artificial intelligence with no human creative contribution will not be registered.

Applying this to code: If you ask an AI “Write me a function to validate an email address,” and you copy‑paste the output without meaningful human modification, that function is not copyrightable. Anyone may legally copy it.

2.3 Thaler v. Perlmutter – The Supreme Court Refuses to Intervene

Dr. Stephen Thaler attempted to register a work he said was created entirely by an AI system. The Copyright Office refused. The district court and the DC Circuit affirmed. In March 2026, the Supreme Court declined to hear the appeal, letting the lower rulings stand.

The DC Circuit’s opinion is blunt: “Copyright law requires an ‘author’ – a human being.” The court noted that the Copyright Act uses terms like “children,” “grandchildren,” and “widow,” which only apply to natural persons. The outcome is settled: purely AI‑generated works have no copyright protection.

3. The “Generated” vs. “Assisted” Distinction

The Copyright Office draws a critical line:

AI-Generated: The AI produces the expression with no human creative control over the specific form. Not copyrightable.

AI-Assisted: A human uses AI as a tool, but provides sufficient creative input – editing, selecting, arranging, rewriting. May be copyrightable (the human’s contributions are protected).

In practice, most current use of AI coding tools leans heavily toward “generated.” Engineers type a prompt, receive code, and commit it without meaningful change. This is exactly the scenario the Copyright Office describes as lacking human authorship.

The Office has explicitly warned that iterative prompting – asking the AI to refine its output – does not automatically confer authorship. Unless the human makes original, creative modifications to the AI’s output, the result remains unprotectable.

4. Why “Sweat of the Brow” Doesn’t Matter

Some argue that because they spent hours crafting prompts, they “worked hard” and should own the result. Copyright law rejected “sweat of the brow” decades ago. In Feist Publications, Inc. v. Rural Telephone Service Co. (1991), the Supreme Court held that effort alone does not create copyright; there must be original creative expression.

A perfect prompt that generates perfect code still produces a work whose expression originates from the AI, not the human. Unless the human alters that expression creatively, there is no copyright.

5. Real‑World Consequences for Companies

5.1 No Infringement Claims Against Copiers

If a competitor copies a purely AI‑generated function from your product, you have no copyright infringement claim. The code is not yours in the eyes of the law. This undercuts the entire value proposition of proprietary software.

5.2 M&A Due Diligence Nightmare

When a company is acquired, the buyer’s lawyers will ask: “What percentage of your code was AI‑generated without human modification?” A high percentage could make the target’s “crown jewel” IP worthless. Deals will collapse or valuations will crater.

5.3 Security and Liability Traps

You cannot retroactively copyright AI‑generated code that already exists. If that code contains a vulnerability, and a competitor copies it, you have no legal recourse. Worse, if the AI reproduced code from its training set that is subject to a restrictive license (GPL, etc.), you could be sued for infringement by the original human author – while you still own none of your own output.

5.4 The “Vibe Coding” Fad Is Self‑Defeating

The current trend of “vibe coding” – describing an app idea to an AI and committing whatever it produces – is legally catastrophic. Entire startups are being built on code that belongs to no one. Investors who discover this will walk away.

6. Can You Protect AI‑Generated Code Any Other Way?

Copyright is not the only form of IP, but the alternatives are weak:

  • Trade secret – Protects confidential information, but offers no protection if the code is reverse‑engineered or independently discovered. Once AI‑generated code is distributed (e.g., in a compiled app), trade secret protection is largely lost.
  • Patent – Might cover algorithms, but most routine code does not meet the novelty and non‑obviousness requirements. AI‑generated code is unlikely to be patentable.
  • Contract – Terms of service can restrict users, but contracts don’t bind competitors who never agreed to them.

In short, there is no substitute for copyright for protecting the literal expression of software code.

7. Recommendations

For organizations using AI coding tools:

  1. Audit your codebase – Identify which files or functions were AI‑generated with minimal human modification. Flag them as unprotectable.
  2. Change workflows – Require that a human engineer meaningfully edit, rewrite, or arrange any AI‑generated output before committing. Document the creative changes.
  3. Maintain a “human authorship” log – Record who modified what, and what creative choices were made.
  4. Do not replace junior developers – Juniors are the humans who will provide the creative modifications needed for copyright. Without them, you are building an unownable codebase.
  5. Consult legal counsel – The law is evolving. The EU and other jurisdictions may take different approaches. But under current US law, the risk is real and severe.

8. Conclusion

The narrative that AI coding tools are simply “efficiency” ignores a foundational legal reality: you do not own what you do not create. When companies fire entry‑level engineers and replace them with AI, they are not just losing future senior talent – they are losing the legal ability to claim ownership over their own product.

The seed corn is being sold. The code being written today may be free for anyone to take tomorrow. And the executives celebrating quarterly stock bumps will be long gone when the lawyers arrive to ask: “Who wrote this – and do you have the papers to prove it?”

The answer, more often than not, will be no.

Sunday, April 26, 2026

How Planck Accidentally Found the Way Back to Newton

The Detour and the Bridge:

How Physics Mistook a Bookkeeping Constant for a Discovery,

and How Planck Accidentally Found the Way Back to Newton

J. Rogers, SE Ohio

Abstract

Newton’s original statement of universal gravitation was a pure proportionality: force scales with the product of masses and inversely with the square of distance. No units. No constants. Just ratios in proportion to ratios. That statement was physically complete. The gravitational constant G was not a discovery about the universe — it was inserted a century and a half later to convert Newton’s dimensionless proportionality into an equation that balances in human unit systems. Physics then told a story in which G represented a deepening of Newton, a quantification of something Newton had only sketched. That story is wrong.

In 1899 Max Planck, working on an unrelated problem in blackbody radiation, stumbled onto three combinations of h, c, and G that produce units of mass, length, and time independent of human convention. He recognized them as universal and called them natural units. But Planck did not see what his discovery actually was. He had found the exact Jacobians — the conversion factors — that translate Newton’s pure unit-free proportions into any human unit chart and back out again without losing anything. He built the bridge back to Newton without knowing the bridge existed or what it connected.

We show that G is not a constant of nature but a composed Jacobian: G = Fₚ · (lₚ/mₚ)², where Fₚ, lₚ, and mₚ are unreduced Planck units constructed from h, c, and G itself. The physics of gravity lives entirely in the dimensionless ratio X = m₁m₂/r² expressed in Planck-scaled units. G appears only when we demand SI output. It is the price of the equals sign in a human unit chart, not a fact about the universe. Recognizing this, we see that Planck’s 1899 result was not the discovery of a natural unit system — it was the rediscovery of Newton’s natural ratios, dressed in the language of a different century.

1. Newton’s Original Statement

Isaac Newton’s law of universal gravitation, as he understood it, was a statement of proportion. Two bodies attract each other with a force that grows with their masses and diminishes with the square of the distance between them. In the notation Newton worked with, this is:

F ∝ mM/r²

The proportionality sign is doing everything here. It says: if you double one mass, the force doubles. If you double the distance, the force drops to a quarter. The ratios are the physics. Newton was describing how things scale relative to each other, not assigning absolute magnitudes in any particular unit system.

This was not a gap in Newton’s understanding waiting to be filled. It was a complete physical statement. Newton knew that the actual numerical value of the force would depend on how you chose to measure mass, distance, and force — on your unit chart. The proportionality was his way of saying: the physics is in the ratios, not in the numbers.

Newton’s contemporaries and successors understood this. For the century and a half following the Principia, gravitational calculations were done by comparing ratios — the mass of the Earth relative to the Sun, the distance of Venus relative to the Earth — without any need for an absolute constant. The proportionality was sufficient for every astronomical calculation of the era.

2. The Invention of G

The gravitational constant G did not appear in Newton’s Principia. It was not present in the work of the eighteenth century astronomers who used Newton’s law to map the solar system with extraordinary precision. It entered physics only in the nineteenth century, after Henry Cavendish had measured the density of the Earth with a torsion balance in 1798, when the need arose to state gravitational attraction as an equation with an equals sign rather than a proportionality.

The problem was this: if you write

F = mM/r²

the dimensions do not balance. The left side has units of force. The right side has units of mass squared divided by length squared. To make the equation dimensionally consistent in any human unit system — SI, CGS, or any other — you need a conversion factor. That factor is G.

G was invented to solve a bookkeeping problem. It carries units of m³ kg⁻¹ s⁻² in SI — units chosen precisely to cancel the dimensional mismatch on the right-hand side of Newton’s equation and produce newtons on the left. G is not measuring anything about gravity. It is measuring the distance between Newton’s dimensionless proportionality and the SI unit chart.
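The bookkeeping can be verified directly by solving the equation for G and carrying the SI units through; this is a standard dimensional check, nothing more:

[G] = [F] · [r]² / ([m] · [M]) = (kg · m · s⁻²) · m² / kg² = m³ kg⁻¹ s⁻²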

Physics then taught this story: Newton discovered the law, and Cavendish ‘weighed the Earth’ by measuring G, and now we know not just the shape of the law but its strength. This framing implies G is telling us something physical — the intrinsic coupling strength of gravity, some fundamental fact about how strongly matter attracts matter.

That implication is false. The numerical value of G — 6.674 × 10⁻¹¹ in SI units — is determined by the sizes of the kilogram, the meter, and the second. Change your unit chart and G changes with it. A fact about the universe does not change when you redefine your ruler.

3. The Story Physics Told Itself

For over a century, physics organized itself around the belief that G, c, h, and k_B were fundamental constants of nature — dimensionful numbers that characterize the universe independently of human choices. This belief generated a research program: measure these constants as precisely as possible, look for relationships between them, and wonder at their particular values.

The wonder was genuine. Why is G so small? Why does the universe have this particular gravitational coupling? The ‘hierarchy problem’ — the enormous disparity between the strength of gravity and the other forces — became one of the central puzzles of twentieth century physics. Entire theoretical frameworks were constructed to explain why G has the value it has.

These were the wrong questions, asked about the wrong things. G is small because the kilogram is an enormous unit relative to the Planck mass, and the meter is an enormous unit relative to the Planck length, and the second is an enormous unit relative to the Planck time. The hierarchy problem is not a problem about gravity. It is a statement about the position of human-scale units relative to the natural scale of the universe. We built our measurement system around things we can hold and count and observe with unaided senses, and those things are extraordinarily far from the Planck scale. G looks small because we are large.
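To make the mismatch concrete, using the unreduced Planck scales constructed below in Section 4 from the CODATA values of h, c, and G (numbers rounded):

1 kilogram ≈ 1.8 × 10⁷ mₚ

1 meter ≈ 2.5 × 10³⁴ lₚ

1 second ≈ 7.4 × 10⁴² tₚ

The SI number 6.674 × 10⁻¹¹ is exactly the combination (lₚ / 1 m)³ / [(mₚ / 1 kg) · (tₚ / 1 s)²] of those magnitudes: a statement about the sizes of our units, not about the strength of gravity.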

The constants were not discovered. They were constructed — forced into existence by the decision to do physics in human unit systems while the underlying physics has no units at all.

4. Planck’s 1899 Discovery

4.1 What Planck Was Trying to Do

In 1899 Max Planck was working on the problem of blackbody radiation — the spectrum of light emitted by a perfect absorber in thermal equilibrium. This was a problem in thermodynamics and electromagnetism, seemingly unrelated to gravity or to fundamental units. In the course of this work Planck introduced a new constant h, later called the quantum of action, to fit the observed spectrum.

Having h in hand, Planck noticed something remarkable. The three constants then known — h, c (the speed of light), and G (the gravitational constant) — could be combined to produce units of mass, length, and time:

lₚ   = √(hG/c³)
mₚ = √(hc/G)
tₚ   = √(hG/c⁵)

Planck computed these and observed that they were independent of any human choice of units — the same physical scales would emerge from any consistent unit system, merely expressed in that chart’s own numbers. He wrote that these represented ‘natural units’ of measurement, units that would be recognized by any civilization anywhere in the universe.
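As a minimal numeric sketch of Planck's construction (using the unreduced h this article insists on, and CODATA values; rounded outputs are shown in the comments):

    # Planck's 1899 construction from h, c, G (unreduced h, as used throughout this article).
    import math

    h = 6.62607015e-34   # J*s  (exact since 2019)
    c = 299792458.0      # m/s  (exact)
    G = 6.674e-11        # m^3 kg^-1 s^-2 (measured, ~5 significant figures)

    l_P = math.sqrt(h * G / c**3)   # Planck length ~ 4.05e-35 m
    m_P = math.sqrt(h * c / G)      # Planck mass   ~ 5.46e-8  kg
    t_P = math.sqrt(h * G / c**5)   # Planck time   ~ 1.35e-43 s

    print(l_P, m_P, t_P)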

4.2 What Planck Saw

Planck saw the universality. He correctly recognized that lₚ, mₚ, and tₚ do not depend on the particular conventions of any human culture — not on the size of the Earth, not on the properties of water, not on any artifact kept in a vault in Paris. He saw that these were, in some sense, nature’s own scales.

This was a genuine insight and Planck was right to be struck by it. The universality he identified is real. These scales do appear wherever a sufficiently advanced physics arrives at the intersection of quantum mechanics, relativity, and gravity, regardless of what unit chart they started with.

4.3 What Planck Did Not See

Planck did not ask why three constants from three apparently independent domains of physics — quantum mechanics, electromagnetism, and gravity — would combine to produce universal scales. He did not follow that question to its answer.

The answer is that h, c, and G are not three independent discoveries about three independent phenomena. They are three Jacobians — three conversion factors between the three independent axes that humans chose for their measurement system (energy-time, space-time, mass-space) — and the dimensionless ratios that actually describe the universe underneath those axes. They combine to produce universal scales because they are all pointing at the same thing from different angles. Their combination is universal because there is one thing on the other side of all three of them.

Planck found three pointers and admired their universality without asking what they were all pointing at. He assumed the three axes — mass, length, time — were genuinely independent, with a natural scale on each. He found the bridge and admired it without crossing it.

Most critically: Planck still called what he found a ‘unit system.’ Natural units. A more convenient coordinate system. He stayed within the framework of dimensional physics, just with better-chosen dimensions. He did not see that the universality he had found was evidence that dimensions are not fundamental at all — that the natural scale is not a scale for three independent things but the single point where three projections of one thing simultaneously equal unity.

5. G Is a Composed Jacobian

The relationship between G and the Planck units is not a definition imposed from outside. It is an identity that follows from the construction of the Planck units themselves:

G = Fₚ · (lₚ / mₚ)²

where Fₚ = mₚc/tₚ is the Planck force. This is not circular. It is the statement that G, when decomposed into its constituent Planck factors, is entirely made of h, c, and the Planck scales derived from them. G carries no information that is not already in h, c, and the structure of the Planck bridge.
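The identity can be checked numerically in a few lines; the check is circular by construction, which is exactly the article's point: G decomposes into the bridge factors without remainder.

    # The composed-Jacobian identity G = F_P * (l_P / m_P)^2, checked numerically.
    import math

    h, c, G = 6.62607015e-34, 299792458.0, 6.674e-11
    l_P = math.sqrt(h * G / c**3)
    m_P = math.sqrt(h * c / G)
    t_P = math.sqrt(h * G / c**5)

    F_P = m_P * c / t_P                  # Planck force, ~1.21e44 N
    G_recomposed = F_P * (l_P / m_P)**2

    print(G, G_recomposed)               # identical up to floating-point rounding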

The three-step procedure for any physical law makes this explicit:

  1. Cancel input units. Express each physical quantity as a dimensionless ratio to its Planck-scale counterpart. Mass becomes m/mₚ. Distance becomes r/lₚ. The inputs are now pure numbers.

  2. Do the physics as Newton stated it. The gravitational relationship in pure ratios is:

X = (m₁/mₚ)(m₂/mₚ) / (r/lₚ)²

This is Newton’s proportionality, now written as an equality between dimensionless ratios. X is a pure number. No units. No constants. This is the physics.

  3. Decorate with output units. Multiply X by the Planck force to get force in SI:

F_SI = X · Fₚ

G appears automatically when you substitute the Planck unit definitions and simplify. It was never in the physics. It emerges from step 3 alone — from the decision to express the output in SI newtons rather than in Planck forces. G is the Jacobian of that decision.
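As a worked example of the three steps, here is a minimal sketch for the Earth-Moon pair; the masses and separation are illustrative round values, not part of the text. Step 3 reproduces the familiar GmM/r² result without G ever appearing in the physics step:

    # Three-step procedure for an illustrative Earth-Moon pair.
    import math

    h, c, G = 6.62607015e-34, 299792458.0, 6.674e-11
    l_P = math.sqrt(h * G / c**3)
    m_P = math.sqrt(h * c / G)
    t_P = math.sqrt(h * G / c**5)
    F_P = m_P * c / t_P                              # Planck force

    m1, m2, r = 5.972e24, 7.342e22, 3.844e8          # Earth (kg), Moon (kg), separation (m)

    # Steps 1 and 2: cancel input units, do the physics as a pure number.
    X = (m1 / m_P) * (m2 / m_P) / (r / l_P)**2

    # Step 3: decorate with output units.
    F_SI = X * F_P

    print(F_SI, G * m1 * m2 / r**2)                  # same number, ~1.98e20 N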

This procedure works for every physical law. Newton’s second law, the Planck-Einstein relation, de Broglie’s wavelength, Boltzmann’s energy-temperature relation — in every case, the physics is a dimensionless ratio X, and the constants (h, c, k_B, G) appear only in step 3 when human units are restored. They are always and only Jacobians.

6. The Planck Scale Is Not a Unit System — It Is the Inversion Point

The standard presentation of Planck units frames them as a particularly convenient coordinate system — one where the constants all equal one and the equations simplify. This framing is subtly wrong in a way that preserves the error Planck made.

The Planck scale is not a unit system. It is the inversion point of the measurement coordinate system — the unique scale where two opposing scaling directions simultaneously cross unity.

Consider the six Planck-normalized ratios:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

Some of these ratios — m/mₚ, E/Eₚ, p/pₚ — track quantities that grow as a physical system becomes more massive or energetic. The wavelength λ scales the other way: more energetic quanta have shorter wavelengths, which is why λ enters the chain inverted, as lₚ/λ. The underlying quantities are reciprocal scalings pulling in opposite directions.

The Planck scale is where these opposing directions exactly cancel — where every ratio simultaneously equals one. It is the crossing point of reciprocal hyperbolas in logarithmic scale space. There is exactly one such point, and it is unique regardless of what unit chart you start from. That uniqueness is why Planck’s scales are universal. Not because they are natural units. Because they are the fixed point of the reciprocal structure of physical measurement.
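A minimal numeric check of the claim, for a single photon; the frequency is an arbitrary illustrative choice, and all six Planck-scaled readings come out identical:

    # For one photon, the six Planck-scaled readings are the same pure number X.
    import math

    h, c, G, k_B = 6.62607015e-34, 299792458.0, 6.674e-11, 1.380649e-23
    l_P = math.sqrt(h * G / c**3)
    m_P = math.sqrt(h * c / G)
    t_P = math.sqrt(h * G / c**5)
    E_P, p_P, T_P = m_P * c**2, m_P * c, m_P * c**2 / k_B

    f = 5.0e14                                      # a visible-light photon, Hz
    E, m, T, lam, p = h * f, h * f / c**2, h * f / k_B, c / f, h * f / c

    for ratio in (E / E_P, f * t_P, m / m_P, T / T_P, l_P / lam, p / p_P):
        print(ratio)                                # six identical values, ~6.8e-29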

When physicists say ‘set the constants to one,’ they are performing this operation informally and without justification — collapsing onto the inversion point without knowing that’s what they’re doing, or why it works, or what it means. The Planck bridge makes the operation rigorous: you are not choosing convenient units, you are expressing physics at the unique scale where all projections of X simultaneously read one.

And crucially: the Planck length is not the pixel of space. The Planck time is not the pixel of time. Physics has made exactly this claim for length and time while quietly not making it for mass — no one claims the Planck mass is the minimum mass, because it is obviously not; the electron is twenty-two orders of magnitude lighter. But the Planck mass is constructed from the same h, c, G combination as the Planck length and Planck time. If Planck mass is not a pixel, neither are Planck length and Planck time. They are all inversion-point coordinates. None of them are fundamental discretizations of anything.

The proof is immediate: change your unit system. Planck length changes. Planck time changes. Planck mass changes. A pixel of the universe cannot change when you redefine your meter. These scales are Jacobian-dependent, not universe-dependent. They are pointers to the inversion point, not the inversion point itself. The inversion point has no size because X has no units.
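A minimal sketch of that chart dependence: the same construction yields a different numeral in CGS than in SI. The physical scale is of course the same; the number attached to it is not.

    # The Planck length computed in two unit charts.
    import math

    def planck_length(h, c, G):
        return math.sqrt(h * G / c**3)

    # SI inputs: J*s, m/s, m^3 kg^-1 s^-2
    l_P_SI  = planck_length(6.62607015e-34, 2.99792458e8, 6.674e-11)
    # CGS inputs: erg*s, cm/s, cm^3 g^-1 s^-2
    l_P_CGS = planck_length(6.62607015e-27, 2.99792458e10, 6.674e-8)

    print(l_P_SI)    # ~4.05e-35  (meters)
    print(l_P_CGS)   # ~4.05e-33  (centimeters)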

7. Newton Had It Right

Returning to Newton’s proportionality with this understanding, we see that Newton’s statement was not incomplete. It was not a sketch awaiting G to make it precise. It was the complete physical statement, expressed in the only form that is actually about the universe rather than about human measurement conventions.

F ∝ mM/r² says: the gravitational interaction scales as the product of mass ratios divided by the square of the distance ratio. It does not say what units to use because units are not part of the physics. Newton was doing X — working directly with dimensionless ratios in pure proportion — without the vocabulary to say so explicitly.

What the three centuries between Newton and the present have produced is not a deepening of Newton’s insight but an elaborate detour around it. We inserted G to get an equation, then treated G as a discovery. We measured G with increasing precision. We built theoretical frameworks to explain G’s value. We worried about the hierarchy problem — why G is so small — without recognizing that G’s smallness is a statement about the size of a kilogram, not about the strength of gravity.

Planck in 1899 handed us the receipt for the detour. The Planck units are the exact conversion factors that show what the detour cost and how to return. h converts between the energy-frequency axis and dimensionless X. c converts between the space-time axis and dimensionless X. G, composed from these and the Planck scales, converts between the mass-geometry axis and dimensionless X. Together they are the bridge from any human unit chart back to Newton’s pure proportions.

Planck built the bridge without knowing what it connected. He was looking at the far shore — the universality of the Planck scales — and called it a natural unit system. The near shore — Newton’s dimensionless proportionalities — was behind him, and he did not turn around.

8. The Equivalence Chain as the Full Statement

Once the bridge is crossed, the full structure becomes visible. The six Planck-normalized ratios are not six different physical quantities. They are six projections of a single dimensionless scalar X onto six different human measurement axes:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

This is not a system of proportionalities. It is a single identity written six times in six different human languages. Every physical quantity is X, read on a different axis.

From six projections taken two at a time, C(6,2) = 15 pairs arise. Each pair is a known physical law: E = mc², E = hf, E = k_B T, λ = h/p, p = hf/c, λT = hc/k_B, and so on. These are not fifteen independent discoveries. They are fifteen different ways of writing X = X, each using two of the six available human axes. The constants that appear in each law — c², h, k_B, c — are the Jacobians for that particular pair of axes.

Physics discovered these laws one at a time over three centuries and treated each as a new insight into nature. The Planck-Einstein relation E = hf was a revolution in quantum mechanics. De Broglie’s λ = h/p was a revolution in wave-particle duality. Wien’s displacement law was a triumph of thermodynamics. They are all the same tautology, X = X, with different Jacobian decorations.

The statistical argument is decisive: the probability that fifteen independently discovered laws would align with exactly the combinatorial pattern of C(6,2) pairs from a single six-member equivalence chain, by coincidence, is less than 10⁻²². This is not coincidence. This is forensic evidence that the laws were never independent. They were always projections of one thing.
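The combinatorics itself is trivial to enumerate; the labels below are just the six axes named in the chain, and each unordered pair stands for one relation once its Jacobian is restored:

    # The fifteen pairs behind the C(6,2) claim.
    from itertools import combinations

    axes = ["E/E_P", "f*t_P", "m/m_P", "T/T_P", "l_P/lambda", "p/p_P"]
    pairs = list(combinations(axes, 2))

    print(len(pairs))        # 15
    for a, b in pairs:
        print(a, "=", b)     # e.g. E/E_P = f*t_P, which becomes E = h f in SI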

9. What Physics Got Wrong and What Comes Next

Physics got the math right. Every prediction of Newtonian gravity, every quantum mechanical calculation, every thermodynamic result — the numbers are correct. The Jacobians h, c, and G work perfectly as conversion factors. No experiment needs to be redone.

What physics got wrong was the interpretation. The constants were treated as discoveries about the universe when they are facts about human unit charts. The Planck scale was treated as a natural unit system when it is the inversion point of a reciprocal coordinate structure. The fifteen laws were treated as independent discoveries when they are projections of one identity. The hierarchy problem was treated as a deep puzzle about gravity when it is a statement about the size of a kilogram.

The correction does not change any formula. It changes what the formulas mean.

Newton’s proportionality is the complete physics of gravity. G is the SI Jacobian. The Planck units are the bridge between them. The equivalence chain is what you find when you cross the bridge. X is what Newton was always describing.

Physics spent over three centuries on a detour. Planck in 1899 — working on an unrelated problem, not knowing what he was doing — accidentally built the way back. It has taken another century to read the sign on the bridge.

10. Conclusion

Newton’s law of universal gravitation was stated as a pure proportionality because that is what it is. The physics of gravity lives in dimensionless ratios. G was not a discovery about gravity. It was the conversion factor inserted to make Newton’s proportionality into a dimensional equation in human units, and it has been mistaken for physical content ever since.

Planck’s 1899 result was not the discovery of natural units. It was the discovery of the three Jacobians — h, c, G — that bridge Newton’s dimensionless ratios to any human unit chart. The Planck scales are not the pixels of space and time. They are the unique inversion point where the reciprocal scaling of physical measurement axes simultaneously reaches unity — the one scale where all six projections of X can simultaneously equal one. The Planck mass being obviously not a pixel of matter is the proof that Planck length and Planck time are not pixels either. All three are Jacobian-dependent pointers, not fundamental discretizations.

The equivalence chain E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X is the full statement of what Planck found, stated in the language Planck did not have. It shows that every physical quantity is one dimensionless ratio X, that every physical law is X = X written on two axes, and that every constant is the Jacobian for a particular pair of axes.

We did not go beyond Newton. We took a three-century detour through dimensional bookkeeping and called it progress. Planck handed us the bridge back in 1899. The bridge was always there. We just did not know what it connected.

Time as Self-Interaction: How the Apparent Arrow Arises from a Single Dimensionless Substrate

J. Rogers, SE Ohio

Abstract

We present a framework in which time is not a fundamental dimension but an emergent label humans place on the sequential updating of a single dimensionless substrate X. The universe has no units — it does not measure itself. X is a dimensionless ratio, and every physical quantity we measure is a projection of X onto a human-chosen axis. The Lorentz factor γ is itself dimensionless, and a boost does not separately affect time, mass, length, and momentum as distinct phenomena — it changes X, and γ is that change. The six Planck-scaled projections of X:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

are not six different physical laws. They are one thing — X — read on six different human axes. Any pair of these six yields a known physical relationship, producing 15 such relationships from a single identity. This holds in every unit system imaginable, because X is dimensionless and the universe has no preferred unit chart. Past states are not stored in a separate temporal dimension; they exist only as patterns in the current configuration of X.

1. Introduction

Standard physics treats time as a fourth dimension with a fixed metric signature and postulates an independent arrow of time. This leads to persistent conceptual difficulties: the problem of the past, the asymmetry between time and space dimensions, and the apparent paradoxes of retrocausality in quantum experiments.

We propose an alternative grounded in a single observation: the universe does not measure itself. Units — seconds, kilograms, meters — are human inventions. Any quantity that carries dimensions is already a projection, a reading of the universe through a human-chosen instrument. The universe itself operates on something prior to measurement.

We call that prior thing X: a dimensionless, unitless ratio that completely describes the state of reality at any instant. X does not evolve in time. The transition X → X' is what we call time. There is no external clock. There is no dimension being traversed. There is only X updating.

2. X Is Dimensionless — Not Because We Choose Clever Units, But Because the Universe Has None

A key error in discussions of natural units is the implication that setting c = ħ = 1 makes things simpler by choice. This misses the point. Natural units are still units — still a human coordinate system. The universe does not operate in natural units any more than it operates in SI.

X is dimensionless not as a result of any unit choice. It is dimensionless because dimensions are human annotations applied to projections of X. The universe just does X. We then read X through six different instruments and assign six different dimensional labels to what we find.

The Planck units are significant not because they are 'natural' but because they are the specific scales at which the human unit chart admits that all six projections yield the same number. They are conversion factors between human axes, not fundamental features of the universe. The constants h, c, and G — used unreduced, never ħ — are the three such Jacobians between the three independent ways humans chose to measure reality.

3. The Six Projections of X

Every physical quantity we measure is X read on a different axis. The six Planck-scaled projections are:

E/Eₚ = f·tₚ = m/mₚ = T/Tₚ = lₚ/λ = p/pₚ = X

where subscript P denotes Planck units constructed from h, c, and G (unreduced). Each ratio is dimensionless. Each ratio is identical. This is not a collection of proportionalities — it is a single identity written six times in six different human languages.

From six projections taken two at a time, C(6,2) = 15 pairs arise. Each pair is a known physical relationship:

E/Eₚ = f·tₚ → Planck relation E = hf

E/Eₚ = m/mₚ → mass-energy equivalence E = mc²

m/mₚ = p/pₚ → momentum-mass relation p = mc (equivalently E = pc)

lₚ/λ = p/pₚ → de Broglie relation λ = h/p

And so on for all 15 pairs. These are not 15 different laws discovered independently. They are 15 different ways of writing X = X, each pair using two of the six human axes. Physics discovered them separately because it was looking at pairs of projections and calling each pair a law, never seeing that all six projections are the same single dimensionless quantity.
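A minimal numeric sketch of the Jacobian claim: each pair of axes is bridged by a fixed combination of the Planck scales, and a few of them reproduce the familiar constants exactly (unreduced h throughout, as in the text):

    # Pairwise Jacobians built from the Planck scales.
    import math

    h, c, G, k_B = 6.62607015e-34, 299792458.0, 6.674e-11, 1.380649e-23
    l_P = math.sqrt(h * G / c**3)
    m_P = math.sqrt(h * c / G)
    t_P = math.sqrt(h * G / c**5)
    E_P, p_P, T_P = m_P * c**2, m_P * c, m_P * c**2 / k_B

    print(E_P * t_P, h)        # energy-frequency pair   -> E = h f
    print(E_P / m_P, c**2)     # energy-mass pair        -> E = m c^2
    print(l_P * p_P, h)        # length-momentum pair    -> lambda = h / p
    print(E_P / T_P, k_B)      # energy-temperature pair -> E = k_B T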

4. The Boost Changes X — γ Is That Change

The Lorentz factor γ is dimensionless. X is dimensionless. This is not coincidental.

When a boost occurs, X changes. γ is the ratio of the new X to the old X as measured on any chosen axis. Because X appears identically on all six axes simultaneously, γ applies to all six axes simultaneously.

This is why a boost appears to change mass, time rate, length, momentum, and energy all at once. Physics treats these as separate relativistic effects linked by the Lorentz transformations, implying they are different phenomena that happen to correlate. They are not. There is one phenomenon — X changing — and γ is that change. The six axis-readings change together because they were always readings of the same single thing.

The standard framing says: motion causes time dilation, and also causes length contraction, and also causes relativistic mass increase. Each 'also' is a mistake. There is no cause and effect chain between a boost and its consequences. The boost is the change in X, and γ is that change, and everything else is humans reading X on their chosen axes.

Asking why time dilates when you boost is like asking why a circle looks like an ellipse when you tilt it. You changed your projection angle. The circle did not do anything. X did not do anything to time separately from what it did to mass separately from what it did to length. It changed once. γ is that one change.
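A minimal sketch of the bookkeeping, using standard special relativity for an illustrative electron at 0.6c; the framing of γ as 'the one change in X' is this article's, but the numbers below are ordinary textbook values:

    # One dimensionless factor, read on several axes.
    import math

    c = 299792458.0
    m0 = 9.1093837015e-31                          # electron rest mass, kg
    v = 0.6 * c
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)      # = 1.25 at v = 0.6 c

    print(gamma)                       # the single dimensionless factor
    print(gamma * m0 * c**2)           # energy: gamma times the rest energy
    print(gamma * m0)                  # relativistic mass: gamma * m0
    print(1.0 / gamma)                 # measured length per unit proper length
                                       # (lab time per unit proper time is also gamma)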

5. Inertia Is Not a Mystery

An object in motion stays at its current X. This is not a law requiring explanation. Nothing is pushing the object, and nothing changes its X unless another interaction occurs. Inertia is the substrate maintaining its current state until X → X' is forced by an interaction.

Newton's first law looked like a law requiring a mechanism. In this framework it is a tautology: X stays X until something makes it X'. The 'something' is another interaction — a collision, a field, a measurement — that forces an update. Between interactions there is no time passing in any meaningful sense. There is just X, unchanged.

6. No Past, Only Patterns

The substrate X does not retain a separate past state. When an interaction occurs, X' completely replaces X. The only record of any previous state is in patterns carried forward in the current configuration — the arrangement of atoms in a memory device, photons not yet absorbed, quantum correlations not yet collapsed.

The past is not a place. It is a pattern in the present. When that pattern is erased or overwritten, the past appears to change — but no time travel occurred. There was no past state to travel to. The trace simply did not survive the update.

This resolves the quantum eraser without invoking retrocausality. The experimenter's choice to measure or erase which-path information participates in a single self-consistent update of X. There is no earlier photon path being retroactively affected. There is only the final correlation pattern — the final X — which is self-consistent with all interactions that participated in producing it.

7. The Arrow of Time

The arrow of time is the direction of accumulating X → X' updates. It points the way it does because interactions are irreversible in practice: the final X retains less information about previous states than would be required to reconstruct them. This is not a fundamental asymmetry built into the geometry of a time dimension. It is a consequence of pattern loss during updates.

A broken egg does not reconstruct itself because the pattern required to reverse the update was not carried forward in X. The arrow exists because information is lossy, not because time has a preferred direction geometrically.

8. Relation to the 2019 SI Redefinition

The 2019 SI redefinition fixed h, e, k_B, and N_A as exact values, joining c, which has been exact since 1983. This was officially described as redefining units, not changing physics. In our framework, this is precisely correct: c, h, and G are unit-chart Jacobians — conversion factors between the axes onto which humans project X. Fixing such constants as exact is an admission that they are not physical discoveries about the universe. They are bookkeeping choices about how to align human measurement axes.

The speed of light c is not a speed the universe obeys. It is the ratio between the human time-axis and the human space-axis. When both axes are projections of the same X, their ratio is fixed — not by physics, but by the geometry of projection.
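For reference, here is a sketch listing the values fixed as exact by the redefinition (c has been exact since 1983); G is notably absent from the list and remains a measured quantity:

    # Constants fixed exactly by the 2019 SI redefinition (plus c, exact since 1983).
    SI_EXACT = {
        "c":   299792458,          # m/s
        "h":   6.62607015e-34,     # J*s
        "e":   1.602176634e-19,    # C
        "k_B": 1.380649e-23,       # J/K
        "N_A": 6.02214076e23,      # 1/mol
    }
    for name, value in SI_EXACT.items():
        print(name, value)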

9. Falsifiability

The framework makes a clear falsifiability criterion. A genuine logical contradiction between a stored trace and a later outcome — a dead cat that was previously alive with no causal chain, a photon arriving before it was emitted — would disprove it. No such experiment exists. Every apparent retrocausal result is consistent with a single self-consistent X update in which the 'earlier' trace was simply never stored in a way that survived.

Additionally: if any physical quantity required separate dimensional status — if any measurement could not be expressed as a dimensionless ratio to its Planck-scale counterpart — the framework would be incomplete. Every quantity so far reduces to X on one of the six axes.

10. Conclusion

The universe has no units because it does not measure itself. Every physical quantity is X — a dimensionless ratio — projected onto a human-chosen axis. The six Planck-scaled projections are identical. Their 15 pairwise combinations are the known laws of physics, each one a different human reading of the single identity X = X.

A boost changes X. γ is that change. Mass dilation, time dilation, length contraction, momentum change — these are not separate effects that happen together. They are one change in X read on multiple axes simultaneously.

Time is not a dimension. It is the accumulation of X → X' updates. The arrow of time is the direction of pattern loss during those updates. Inertia is X staying X between interactions. Retrocausality is an illusion caused by misreading pattern overwriting as backward causation.

The framework does not add new physics. It removes the unnecessary scaffolding — dimensions, separate constants, causal chains between correlated projections — and reveals that what remains is X, dimensionless, unitless, and singular.
