Mastodon Politics, Power, and Science

Tuesday, May 5, 2026

The Ultimate Legal Paradox: Why Colonial Land “Purchases” Can Never Be Valid

J. Rogers, SE Ohio

The Core Contradiction

You cannot invoke a legal system that does not yet have jurisdiction to validate a transaction that is supposed to give that system jurisdiction. This is a classic circulus in probando — a circular proof that collapses under its own weight.

The Argument Restated Formally

Premise 1: Before any European “sale” or “treaty,” the land in North America was under the sovereign jurisdiction of the Indigenous nations who inhabited it. Under international law as it was understood even by Europeans at the time, sovereignty follows occupation and governance.

Premise 2: Any transaction concerning land must be governed by the law of the jurisdiction where the land sits at the moment of the transaction. (This is a fundamental principle of property law known as lex rei sitae — the law of the place where the property is located.)

Conclusion A: Therefore, any attempted land purchase in North America before European sovereignty was established was governed exclusively by Indigenous law.

Premise 3: Under Indigenous law, as we have established:

  • Land was not private property but a communal trust
  • No individual (including a chief) had unilateral authority to alienate land
  • “Sales” of land in the European sense were legally impossible

Conclusion B: Therefore, under the only law that had jurisdiction at the time, no valid land sale ever occurred.

Premise 4: European law — English common law, Dutch Roman-Dutch law, etc. — had no force on Indigenous soil until European sovereignty was established. But European sovereignty could only be established by valid cession (treaty or purchase) or conquest.

Premise 5: Conquest was not legally claimed in most of these transactions (the colonists consistently denied they were conquering; they claimed they were purchasing).

Conclusion C: Therefore, European law never properly took effect, because the jurisdictional “trigger” — valid cession — never occurred.

The Catch-22 Stated Plainly

  • If colonists said “Indigenous law governs this transaction,” the sale is impossible, because Indigenous law prohibits selling land.
  • If colonists said “European law governs this transaction,” the transaction is void ab initio, because European law had no jurisdiction yet.

There is no third path. Every colonial deed is trapped in this double-bind.

The Doctrine of Discovery Contradiction

We also correctly identified the fatal inconsistency in the “Doctrine of Discovery” — the papal bulls and royal charters that supposedly granted European monarchs title to lands they “discovered.”

  • If the Doctrine of Discovery gave full title to the European crown automatically upon discovery, then no negotiation with Indigenous peoples was necessary or legally meaningful. The land was already owned. The deeds were theatrical.
  • But the colonists did negotiate, did hand out beads, and did collect signatures. Those actions are legal admissions that they did not already hold title. By negotiating, they recognized Indigenous sovereignty.

You cannot have it both ways. Either the land was already European (in which case the deeds are meaningless props) or the land was Indigenous (in which case Indigenous law governs the transaction). The colonial legal position requires a logical contradiction.

The “Law of the Land” Argument

Finally, the argument invokes the Magna Carta’s lex terrae — the law of the land. This is devastating because it turns the colonists’ own legal heritage against them.

The Magna Carta (1215, clause 39) guaranteed that no free man could be dispossessed of his property except by “the lawful judgment of his peers or by the law of the land.” When English colonists arrived in North America, the “law of the land” — the existing, operative legal system — was Indigenous. To dispossess Indigenous peoples, the colonists would have needed to follow that law.

They did not. They could not. Because that law did not permit what they wanted to do.

The Only Logical Conclusion

The land was never sold. There is no legal theory — Indigenous or European — under which these transactions can be validated. The beads bought nothing. The deeds are not contracts; they are artifacts of a legal nullity, preserved only because the party with superior military force chose to treat them as real.

This is not merely a historical curiosity. It has modern implications for land claims, treaty rights, and restorative justice. If the original “sales” were void ab initio, then the underlying Indigenous title was never extinguished. The land never legally changed hands.

Final Statement

We have identified the foundational fraud of the colonial era — not just a moral fraud, but a legal fraud so complete that it invalidates every deed, every treaty, every “purchase” from the first bead to the last signature. The colonists built their property systems on a jurisdictional paradox, and no amount of subsequent legislation or court rulings can make a void transaction valid.

The fleas never bought the dog. They just claimed they did, and they had the guns to make the claim stick. But legal truth and military power are not the same thing.

“A Want of Understanding”: Why Indigenous Land “Sales” to European Colonists Were Legally Void From the Start

J. Rogers, SE Ohio

Abstract

The common colonial narrative that Native American peoples “sold” Manhattan, Pennsylvania, and countless other territories for trinkets like glass beads rests on a series of legal fictions. This paper argues that from the perspective of Indigenous legal systems—and even from fundamental principles of Western contract law—no valid land sale ever occurred. The paper examines four independent and mutually reinforcing grounds for voiding these transactions: (1) the Indigenous legal framework of land stewardship versus personal ownership; (2) the lack of unilateral authority in Indigenous governance, including the inability of any single chief to alienate land; (3) a fundamental “want of understanding” (lack of mutual assent) between the parties; and (4) the grossly unconscionable disparity in exchange value. Together, these factors demonstrate that the beads bought nothing.

Introduction

“The Indians sold Manhattan for beads” is a phrase taught to generations of schoolchildren. It implies a transaction: willing sellers, willing buyers, a fair exchange. But a closer examination reveals the phrase to be a colonial justification for dispossession, not a description of reality. The legal systems of the Indigenous peoples of North America did not recognize individual land ownership, did not empower any single leader to alienate territory, and did not conceive of land as a commodity to be sold. When Europeans presented deeds for signature, the two sides operated under what legal scholars call a “category error”—they were not speaking the same language of property. As a result, any purported “sale” fails the most basic tests of contract validity.

I. Land Stewardship, Not Personal Ownership

Most Indigenous nations east of the Mississippi and across the Great Plains—including the Lenape, Haudenosaunee (Iroquois), Cherokee, and Ojibwe—held a relational view of land. People did not own the land; rather, they belonged to it. Land was a living relative, a provider, and a trust for future generations. Individual or family rights were usufructuary: the right to hunt, fish, plant, and gather on specific tracts, but not to permanently transfer or exclude others from the territory as a whole.

This contrasts sharply with the European concept of dominium—exclusive, alienable, private property that could be bought, sold, and inherited like a coat. For an Indigenous person, asking “Who owns this land?” was as nonsensical as asking “Who owns the air?” or “Who owns tomorrow?” The very question presupposed a framework that did not exist.

II. No Unilateral Authority: The Chief Could Not Sell

Even if one imagines a chief who somehow adopted the European concept of sale, that chief lacked legal authority under Indigenous law to complete the transaction. Indigenous governance was consensual and distributed.

  • Haudenosaunee (Iroquois) Confederacy: Land decisions required deliberation among the Grand Council of clan mothers and chiefs. No single sachem could cede territory.
  • Lenape (Delaware) villages: Consensus-based councils of elders and heads of families made decisions affecting the collective. A chief was a facilitator and steward, not a sovereign with unilateral power.
  • Plains nations (Lakota, Cheyenne): Land was communal. Even a renowned war chief could not sell hunting grounds without the agreement of the band.

Colonial deeds often bear the mark or signature of a single individual whom Europeans called a “chief.” In many cases, that person was not an authorized leader at all—perhaps a low-ranking villager, a visitor, or someone the colonists had plied with alcohol. But even when the signer was a genuine leader, his action was ultra vires (beyond his legal powers). Under Indigenous law, the deed was void from the moment it was signed.

III. Want of Understanding: No Meeting of the Minds

Western contract law requires consensus ad idem—a meeting of the minds. Both parties must share a common understanding of the essential terms. In the land “sales” between colonists and Indigenous peoples, no such meeting occurred.

To the European, the transaction meant: “You permanently and forever give us this territory. You and your descendants will leave, and we will have exclusive ownership and sovereignty.”

To the Indigenous participant (to the extent they understood the European framework at all), the exchange likely meant: “You have given us gifts (beads, cloth, axes). In friendship, we will share the land. You may pass through, hunt, or plant alongside us. The land remains ours and our children’s.”

This is not a minor misunderstanding. It is a complete divergence on the nature of the right being transferred. Many Indigenous leaders later testified that they believed they were agreeing to shared use or a military alliance, not a permanent cession of their homeland. The colonial powers, for their part, rarely attempted to explain the European concept of exclusive, permanent sale—because if they had, the transaction would never have occurred.

IV. Lack of Reasonable Exchange: Unconscionable Disparity

Even if one disregards the cultural and legal mismatches, the exchange itself betrays fraud. A “box of glass beads” was worth a trivial sum—perhaps the equivalent of a few dollars in modern currency. Millions of acres of land, with timber, water, game, minerals, and agricultural potential, were worth an astronomical fortune.

Under Western contract law, such a one-sided exchange can be voided for unconscionability. A court may also infer fraud or undue influence where one party is clearly vulnerable and the other takes grossly unfair advantage. The colonists were sophisticated traders who knew the value of beads and the value of land. The Indigenous participants were often unfamiliar with European concepts of value, writing, and permanent alienation. The disparity is so extreme that no reasonable person could believe it represented a voluntary, informed, arm’s-length bargain.

The beads were not payment. They were a prop—a token offered to create the illusion of a transaction where none existed.

Conclusion: The Beads Bought Nothing

The story of “Indians selling land for beads” persists because it serves a comforting narrative: that dispossession was peaceful, consensual, and fair. The historical and legal truth is otherwise. Indigenous legal systems did not permit individual ownership of land. No chief had unilateral authority to sell. The two sides shared no common understanding of what a “sale” meant. And the paltry value of the goods exchanged compared to the land itself is evidence of fraud, not good faith.

Under Indigenous law, the “sales” were void ab initio (from the beginning). Under fundamental principles of Western contract law—lack of mutual assent, lack of authority, and unconscionability—they were equally void. The beads bought nothing. The land was never sold. It was taken, and the deeds are not contracts but artifacts of theft.

Author’s Note

This paper does not argue that all treaties between Indigenous nations and colonial powers were void; some later treaties, negotiated under greater mutual understanding and with genuine consent, may have had legal force. But the early, small-scale “sales” for trinkets—the foundation of the myth—fail every test of validity. Recognizing this is not merely an academic exercise. It is an act of historical justice.

Monday, May 4, 2026

Breaking the GM Degeneracy

J. Rogers, SE Ohio

A Deep-Space Oscillating Test Mass Experiment to Determine the Gravitational Constant G to Nine Significant Figures

Abstract

The gravitational constant G remains the least precisely known fundamental constant in physics, determined to only approximately five significant figures after three centuries of effort. We identify the structural cause: G and the mass M of any gravitating body large enough to produce a measurable field are observationally inseparable. Every astronomical measurement yields only the product GM. Cavendish-style laboratory experiments have failed to converge across more than two centuries. We propose a conceptually simple resolution: construct a toroidal mass of precisely known composition in deep space, far from competing gravitational sources, bore a hole through its center, and drop a test mass through the hole. The test mass oscillates indefinitely under gravity alone, with period set by the release height and the known surface mass density of the toroid. An LED-based interferometer simultaneously tracks oscillation period T, instantaneous acceleration a(t), and distance r(t) from the center hole throughout each oscillation, providing three independent and overdetermined routes to G from a single data stream. Because the mass is known from construction rather than inferred from gravity, the GM degeneracy is broken for the first time. The test mass is repeatedly lifted, settled, and released, accumulating thousands of independent period measurements over a mission lifetime exceeding one year on station. Tidal contamination from the Sun at 3 AU is verified by calculation to be 1.1 × 10⁸ times smaller than the measurement signal, and galactic tidal forces are 6.7 × 10⁷ times smaller. The deep space environment is not merely quiet — the rest of the universe has gone effectively silent. All required technologies are flight-proven. The mission requires no new physics and no new engineering principles.

1. The Problem: GM is What Nature Exposes

Newton's law of gravitation is conventionally written as:

F = G M m / r²

and general relativity encodes gravitational geometry through terms of the form GM/c²r. In both frameworks, G and M appear as a product. This is not a mathematical convenience — it reflects a deep operational fact: no measurement that relies on gravity to establish the behavior of a gravitating body can separate G from M. The observable is always GM.

NASA's operational practice makes this explicit. Planetary ephemerides, spacecraft navigation, and GPS relativistic corrections all use the gravitational parameter mu = GM, known for Earth to approximately nine significant figures. The individual values of G and M are not used. They cannot be, because the mass of any astronomically significant body is inferred from its gravitational behavior, making any G determination from astronomical sources tautological. If we knew the mass of the Earth independently, we would know G to identical precision. We do not, and the circularity is exact and complete.
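This inseparability can be demonstrated directly. The sketch below is illustrative only: it integrates the same low Earth orbit twice, once with (G, M) and once with (2G, M/2), using Earth's standard gravitational parameter; the integrator and step size are arbitrary choices.

```python
import math

def propagate(G, M, r0, v0, dt=60.0, steps=5000):
    """Symplectic-Euler integration of a planar two-body orbit.
    G and M enter the force law only through the product G*M."""
    x, y, vx, vy = r0, 0.0, 0.0, v0
    for _ in range(steps):
        r3 = math.hypot(x, y) ** 3
        vx -= G * M * x / r3 * dt
        vy -= G * M * y / r3 * dt
        x += vx * dt
        y += vy * dt
    return x, y

mu = 3.986004418e14              # Earth's GM (m^3/s^2), the quantity navigation actually uses
G = 6.674e-11
r0 = 7.0e6                       # m, low Earth orbit radius
v0 = math.sqrt(mu / r0)          # circular-orbit speed

# Double G and halve M: no orbital observation can tell the difference.
xa, ya = propagate(G, mu / G, r0, v0)
xb, yb = propagate(2 * G, mu / (2 * G), r0, v0)
print(math.hypot(xa - xb, ya - yb))   # separation after ~50 orbits: effectively zero
```

Any reparameterization G → kG, M → M/k leaves every observable unchanged, which is exactly why astronomical data alone can never deliver G.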

G is not philosophically uncertain. It has an exact value. Nature knows it precisely. The uncertainty is entirely on the measurement side, and the measurement side has a specific, identifiable structural problem that has gone unresolved for three centuries.

2. The Failure of Laboratory Methods

The Cavendish torsion balance, introduced in 1798, was designed to escape this circularity by using laboratory-scale masses whose weight could be determined independently through mechanical means. After more than two centuries of refinement across dozens of independent experiments at leading metrology institutes worldwide, the results do not converge.

Recent high-precision determinations of G disagree with each other by 40 to 50 parts per million, while individual experiments claim uncertainties of 10 to 20 parts per million. The discrepancy between results exceeds the claimed precision by a factor of two to five. CODATA periodically widens the accepted uncertainty interval to accommodate the spread.

The fundamental difficulty is the signal-to-noise environment. Gravity is the weakest force. Laboratory-scale masses produce gravitational forces at or below the level of seismic noise, thermal expansion, electrostatic coupling, and the gravitational influence of nearby structures. No experimental design has succeeded in isolating the gravitational signal cleanly from this noise floor at the required precision. More than two centuries of effort is sufficient to conclude that this is not a solvable engineering problem in the terrestrial environment.

3. The Consequence: A Blurred Axis

The inability to determine G precisely propagates directly into every dimensionless ratio that crosses the gravitational-electromagnetic interface. The Planck mass is defined as:

m_P = sqrt(hbar * c / G)

and inherits the full uncertainty of G. The fine structure constant alpha is known to twelve significant figures. The ratio m_p/m_P — proton mass to Planck mass — is a fundamental dimensionless number that any unified theory must address. It is known to only five significant figures, not because of any difficulty on the electromagnetic side, but entirely because G limits the determination of m_P.

Numerical computation confirms the scaling precisely. Across the current uncertainty range of G — approximately 15 parts per million — the natural unit scale derived from hbar, c, and G shifts by 7.5 parts per million, following the theoretical scaling X proportional to G^(-1/2) exactly. The internal self-consistency within any fixed value of G is maintained to floating-point precision (~10^-16). The geometry is exact. The uncertainty is entirely in our measurement access to G.
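That numerical check is reproducible in a few lines. The sketch below uses CODATA-style central values and the 15 parts per million band quoted above, and confirms that the relative shift of m_P is half that of G, as m_P proportional to G^(-1/2) requires.

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s (exact by definition)
G0 = 6.67430e-11         # m^3 kg^-1 s^-2, CODATA central value

def planck_mass(G):
    """m_P = sqrt(hbar * c / G); inherits half of G's relative uncertainty."""
    return math.sqrt(hbar * c / G)

dG = 15e-6   # ~15 ppm relative uncertainty in G
shift = abs(planck_mass(G0 * (1 + dG)) - planck_mass(G0)) / planck_mass(G0)
print(f"m_P = {planck_mass(G0):.6e} kg, relative shift = {shift:.2e}")   # shift ~ 7.5e-6
```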

G is a unit conversion factor — the bridge between our independently-defined kilogram and the natural geometry of gravity. Like c, it would be 1 by construction if our mass unit had been defined gravitationally. The constants G and c are not deep truths about nature. They are artifacts of defining length, time, and mass independently using different physical processes. The scandal is not that G is uncertain. The scandal is that we have treated a units alignment problem as a fundamental mystery for three centuries.

Any theory proposing an exact relationship between m_p/m_P and alpha cannot be tested better than five significant figures. The relationship may be exact, numerically sitting in plain sight, and we cannot resolve it. This is structurally identical to how epicyclic astronomy concealed elliptical orbits for over a millennium. Epicycles made accurate predictions. The system was internally consistent. And buried inside that predictive success was exact geometry the representation could not expose. GM is our epicycle. The exact dimensionless ratios connecting gravity to electromagnetism are hidden inside a perfectly predictive framework that cannot expose them.

4. The Proposed Experiment

4.1 Core Concept

The GM degeneracy is broken by one structural change: use a gravitating body whose mass is known from construction, not from its gravitational behavior. A body assembled from measured components in a zero-gravity environment has a mass determined by the sum of its parts, each weighed through the metrological chain anchored to the SI kilogram, independently of gravity. This mass M is known before the gravitational measurement begins.

A flat toroidal disk — a large washer — with a hole bored through its center axis is the chosen geometry. A test mass m is dropped through the hole. It oscillates back and forth through the disk under gravity alone. The period of this oscillation depends only on the surface mass density sigma of the disk, which is known from construction. The measurement of G reduces to a measurement of oscillation period, acceleration, and distance — all tracked simultaneously by a single LED interferometer.

4.2 Physics of the Toroidal Oscillator

For a uniform infinite plane of surface mass density sigma, the gravitational acceleration is constant on both sides:

g = 2 * pi * G * sigma

independent of distance from the surface. Because the field has constant magnitude and always points back toward the disk, a test mass released from rest at height h above the plane oscillates through the hole under constant-magnitude acceleration rather than simple harmonic motion: each quarter-period is a free fall through the height h, giving the period:

T = 4 * sqrt(2 * h / (2 * pi * G * sigma)) = 4 * sqrt(h / (pi * G * sigma))

The period therefore scales as the square root of the release height. This amplitude dependence costs nothing, because the interferometer tracks the turning-point height continuously: every cycle from first to last yields its own (h, T) pair and therefore its own independent determination of G. The experiment does not degrade as energy dissipates. The LED interferometer simultaneously tracks three independent observables throughout every oscillation:
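Under the uniform-plane model the field magnitude is g = 2*pi*G*sigma on either side of the disk, so a single measured cycle inverts algebraically for G. The sketch below is a round-trip check with illustrative numbers; the surface density and release height are placeholders, not mission values.

```python
import math

def G_from_cycle(T, h, sigma):
    """Invert one oscillation cycle for G under the uniform-plane model.
    Constant field g = 2*pi*G*sigma means each quarter-period is a free
    fall through the turning height h: T = 4*sqrt(2*h/g), so g = 32*h/T**2."""
    g = 32.0 * h / T**2
    return g / (2.0 * math.pi * sigma)

sigma = 1600.0                        # kg/m^2, placeholder surface density from mass accounting
h = 1.0                               # m, release height from the interferometer
G_true = 6.674e-11
g = 2.0 * math.pi * G_true * sigma    # ~6.7e-7 m/s^2
T = 4.0 * math.sqrt(2.0 * h / g)      # period this model predicts (~2 hours here)

print(G_from_cycle(T, h, sigma))      # round-trips to 6.674e-11
```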

— T: oscillation period, from precise timing of successive zero crossings at the disk plane

— a(t): instantaneous gravitational acceleration, from the second time derivative of interferometric position

— r(t): distance from the center of the hole at every instant, providing continuous geometry verification and confirming the mass distribution model

These three observables are overdetermined for a single unknown G. Their mutual consistency throughout each oscillation provides direct internal systematic error estimation with no additional apparatus. Any unmodeled perturbation shows up as inconsistency between the three routes to G before it corrupts the result.

4.3 Washer Construction: Batteries as Known Mass

The toroidal disk is constructed from cylindrical battery cells packed into a toroidal form and encased in a precision metal shell. The batteries serve a dual purpose: they are the power source for the mission and they constitute the primary known mass. No dead-weight ballast is carried. Every kilogram of structural mass is simultaneously delivering power.

The metal shell provides a uniform, precisely characterized outer geometry. Shell thickness is known. Battery cell geometry and individual mass are measured before assembly. Total mass M is the sum of precisely accounted components, all measured on the ground before launch. The surface mass density sigma = M / (pi * (R_outer^2 - R_inner^2)) is known to the precision of the pre-launch mass accounting.

Battery discharge does not meaningfully change the mass. The chemical energy released is accounted for by E = mc^2 at a level far below the measurement floor. Mass M is effectively constant throughout the mission lifetime, and any slow drift is trackable from power consumption telemetry.
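The E = mc^2 accounting is quick to verify. Assuming, purely for illustration, that the battery toroid stores 500 kilowatt-hours of chemical energy, the total mass converted over the mission is:

```python
c = 2.99792458e8                 # m/s
M = 5000.0                       # kg, toroid mass used in Section 5.1
E = 500.0 * 3.6e6                # J; assumed 500 kWh of stored chemical energy
delta_m = E / c**2               # kg of mass carried away as the energy is spent
print(delta_m, delta_m / M)      # ~2.0e-8 kg, ~4e-12 of the total mass
```

Even complete discharge changes sigma at the parts-per-trillion level, three orders of magnitude below a parts-per-billion error budget.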

4.4 LED Interferometer

The measurement instrument is an LED-based Michelson interferometer. A narrow-bandwidth LED with a bandpass filter provides sufficient coherence length for path differences involved in tracking oscillation amplitudes of order one meter. Power consumption is in the milliwatt range. No laser cooling, no frequency stabilization, and no moving optical components are required.

At 3 AU from the Sun the sky is dark. There is no solar background to filter against. Starlight provides negligible interferometric noise. The deep space environment is here an advantage: the LED signal completely dominates the detector with no competing background illumination.

The test mass carries a retroreflector. The interferometer arm length changes as the test mass oscillates, producing fringes that encode position continuously. Zero crossings at the disk plane are timed to atomic clock precision. The combination of T, a(t), and r(t) as continuous functions throughout each oscillation delivers G from three independent routes simultaneously from a single passive optical system.

4.5 Operational Procedure

The test mass is lifted to a specified height above the hole center and held. The LED interferometer confirms the test mass is at rest — zero velocity, stable position — before release. The actuator releases. The test mass falls through the hole, decelerates on the far side, returns, and oscillates. The interferometer tracks every cycle continuously.

The oscillation runs until amplitude has decayed to near the noise floor or lateral drift approaches the hole wall. The actuator catches the test mass, lifts it back to the starting height, and waits for the interferometer to confirm stillness. A new run begins. This cycle repeats throughout the on-station mission phase.

Each run provides thousands of independent period measurements. Each lift-settle-release cycle is independently characterized. Run-to-run consistency is a direct systematic error check. The experiment is not a one-shot measurement — it is the same experiment performed thousands of times with the same apparatus in a zero-noise environment, accumulating statistics continuously for over a year.

4.6 Mission Architecture

The washer payload is delivered to the target location by a conventional booster. On arrival the booster releases the washer and fires away to a minimum separation distance of 100 kilometers. At this separation the booster's gravitational influence on the test mass oscillation is negligible, and the booster acts as a radio relay — receiving low-power data from the washer and forwarding it to Earth at full deep space communication power. The booster requires no further maneuvering and recedes on a diverging trajectory.

Following separation, the washer is left to settle. Vibration from separation, thermal distortion as the system reaches radiative equilibrium, and any residual rotation are all monitored by the LED interferometer and allowed to damp naturally. This settling phase may take weeks to months. Formal measurement runs do not begin until the interferometer confirms the system is at rest to the required precision. The settling period characterizes the system in detail and verifies the mass distribution model against the measured gravitational field geometry before any G determination is attempted.

The target location is at or beyond 3 AU from the Sun. The solar tidal gradient across the experimental apparatus at this distance is verified by calculation (Section 5) to be more than 100 million times smaller than the measurement signal. The precise location is determined by a standard orbital mechanics trade between mission delta-V and acceptable tidal contamination. Minimum on-station measurement duration is one year.

4.7 Power Budget

The LED interferometer operates in the milliwatt range. The test mass actuator draws power only during brief lift and release operations. The onboard computer handles data logging and interferometer control at low clock rates. The radio transmitter to the booster relay at 100 kilometers requires trivial power at that range. Total payload power budget is estimated at 20 to 30 watts continuous. The toroidal battery mass, sized for the known mass requirement of the experiment, provides this power for the required mission lifetime with margin.

5. Tidal Contamination Analysis

The primary concern for any deep space gravitational experiment is contamination from external tidal forces. We calculate the tidal acceleration from all significant sources across the 1-meter scale of the apparatus and compare to the measurement signal.

5.1 Signal Strength

For a toroidal mass M = 5000 kg with a test mass at distance r = 1 meter from the center hole, the gravitational acceleration constituting the measurement signal, estimated here by treating the toroid as a point mass for an order-of-magnitude reference, is:

a_signal = G*M / r^2 = (6.674e-11)(5000) / (1)^2 = 3.34e-7 m/s^2

This is the reference against which all tidal contaminations are compared.

5.2 Solar Tidal Force

The tidal acceleration from the Sun across an apparatus of length delta_r is:

a_tidal = 2 * G * M_sun / R^3 * delta_r

where R is the heliocentric distance. At 1 AU this gives 7.93 × 10⁻¹⁴ m/s² across 1 meter — already 4.2 million times smaller than the signal. The tidal force scales as 1/R³, so at 3 AU it drops by a further factor of 27:

a_tidal,Sun (3 AU) = 2.94e-15 m/s^2

This is 113 million times smaller than the measurement signal. As a fraction of the signal it represents 8.8 parts per billion, within an order of magnitude of the 1 part per billion threshold required for nine significant figures in G; because the solar ephemeris is precisely known, this deterministic residual can be modeled and subtracted.

5.3 Galactic Tidal Force

The local galactic tidal acceleration is known from stellar dynamics and pulsar timing studies to be approximately 5 × 10⁻¹⁵ m/s² per meter of apparatus length. For the 1-meter scale of this experiment:

a_tidal,Galaxy = 5.00e-15 m/s^2

This is 67 million times smaller than the signal — comparable to the solar tidal and equally negligible.

5.4 Planetary Tidal Forces

Jupiter, the most massive planet, presents a tidal acceleration across 1 meter of approximately 1.18 × 10⁻¹⁸ m/s² at a conservative minimum separation of 4 AU. This is 280 billion times smaller than the signal and requires no further consideration.
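The entries in Table 1 can be reproduced with the same tidal formula used in Section 5.2. In the sketch below the solar and jovian masses are standard values, and the galactic figure is quoted from Section 5.3 rather than computed.

```python
import math

G = 6.674e-11
AU = 1.496e11                         # m
M_sun, M_jup = 1.989e30, 1.898e27     # kg

a_signal = G * 5000.0 / 1.0**2        # toroid signal at r = 1 m (Section 5.1)

def tidal(M, R, dr=1.0):
    """Differential (tidal) acceleration across a baseline dr at distance R."""
    return 2.0 * G * M / R**3 * dr

rows = {
    "Sun at 1 AU":     tidal(M_sun, 1 * AU),
    "Sun at 3 AU":     tidal(M_sun, 3 * AU),
    "Milky Way":       5.0e-15,       # quoted from stellar dynamics, Section 5.3
    "Jupiter at 4 AU": tidal(M_jup, 4 * AU),
}
for name, a in rows.items():
    print(f"{name:16s} {a:9.2e} m/s^2  {a_signal / a:8.1e} x smaller")
```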

5.5 Summary

Table 1 summarizes all tidal contamination sources. The worst-case external contamination — the galactic tidal force — is 67 million times smaller than the measurement signal. At 3 AU, the rest of the universe has gone effectively silent. This is not a marginal improvement over the terrestrial environment. It is a qualitative change in what measurement is possible.

Source                                    Tidal Acceleration (m/s²)   Ratio to Signal         Orders of Magnitude Below Signal
Toroid, 5000 kg at 1 m (signal)           3.34 × 10⁻⁷                 1 (reference)           0
Sun at 1 AU, across 1 m                   7.93 × 10⁻¹⁴                4.2 × 10⁶ × smaller     6.6
Sun at 3 AU, across 1 m                   2.94 × 10⁻¹⁵                1.1 × 10⁸ × smaller     8.1
Milky Way galaxy, across 1 m              5.00 × 10⁻¹⁵                6.7 × 10⁷ × smaller     7.8
Jupiter at 4 AU separation, across 1 m    1.18 × 10⁻¹⁸                2.8 × 10¹¹ × smaller    11.4

Table 1. Tidal contamination at 3 AU compared to measurement signal (M = 5000 kg, r = 1 m, delta_r = 1 m).

The solar tidal force at 3 AU, as a fraction of signal, is 8.8 parts per billion. For reference, nine significant figures of precision in G requires controlling systematics to 1 part per billion, so the raw solar tide exceeds this threshold by a factor of nearly 9. Two mitigations are available. First, the solar tide is deterministic: the solar ephemeris is known far more precisely than this, so the tide can be modeled and subtracted. Second, distance helps: an orbit at 4 AU reduces the contamination by a further factor of 2.4, and about 6.5 AU would suppress it below 1 part per billion by geometry alone. For experiments requiring fewer than nine figures of precision, 3 AU provides ample margin without any subtraction.

6. Statistical Power of Repeated Oscillation

The fundamental advantage of this experiment over every previous G determination is statistical accumulation. A single period measurement T has some uncertainty epsilon from timing precision and environmental noise. After N independent cycles, the uncertainty on the mean period is epsilon / sqrt(N).

In a one-year on-station mission, with oscillation periods ranging from minutes to roughly an hour depending on release height and surface density, the number of measurable cycles is of order thousands to tens of thousands across the full mission. The statistical reduction factor sqrt(N) is of order 100. Random errors that would limit a single cycle to six or seven significant figures are beaten down by roughly two further orders of magnitude by accumulation alone.
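The 1/sqrt(N) behavior is easy to confirm with a toy Monte Carlo: draw noisy period measurements around a placeholder true value and watch the scatter of the mean shrink tenfold as N grows a hundredfold. The numbers are illustrative, not mission predictions.

```python
import random
import statistics

random.seed(1)
T_true, eps = 6900.0, 1.0    # placeholder period (s) and per-cycle timing noise (s)

def scatter_of_mean(N, trials=400):
    """Standard deviation of the N-cycle mean period across repeated trials."""
    means = [statistics.fmean(random.gauss(T_true, eps) for _ in range(N))
             for _ in range(trials)]
    return statistics.pstdev(means)

ratio = scatter_of_mean(100) / scatter_of_mean(10_000)
print(ratio)   # ~10, i.e. sqrt(10000 / 100)
```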

No ground-based experiment has ever had this. Cavendish apparatus yields one measurement per configuration. Resets are slow and noisy. N never gets large. The noise floor never drops because statistics never accumulate. Here N is limited only by mission lifetime. The experiment improves continuously as long as the apparatus operates.

The three simultaneous observables — T, a(t), and r(t) — provide independent routes to G from the same data stream. Their mutual consistency serves as a continuous systematic error monitor throughout the mission. Any unmodeled perturbation that would corrupt one observable will show up as inconsistency among all three before it biases the G determination.

7. Scientific Return

A determination of G to nine significant figures immediately propagates precision improvement through every dimensionless ratio in physics that involves gravity. The Planck mass m_P = sqrt(hbar*c/G) becomes known to nine figures. The ratio m_p/m_P — proton mass to Planck mass — sharpens from five to nine significant figures with no additional measurement on the electromagnetic side.
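As a numerical check of this propagation, the Planck mass and its uncertainty under the CODATA 2018 value of G can be computed directly (a sketch using published CODATA figures):

```python
import math

hbar = 1.054571817e-34  # J*s, reduced Planck constant (exact via h in the SI)
c = 2.99792458e8        # m/s (exact)
G = 6.67430e-11         # m^3 kg^-1 s^-2 (CODATA 2018)
rel_u_G = 2.2e-5        # CODATA 2018 relative standard uncertainty of G

m_P = math.sqrt(hbar * c / G)   # Planck mass

# m_P scales as G^(-1/2), so its relative uncertainty is half of G's.
rel_u_mP = 0.5 * rel_u_G

print(f"m_P = {m_P:.6e} kg")                    # ~2.176434e-08 kg
print(f"relative uncertainty: {rel_u_mP:.1e}")  # 1.1e-05
```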

The gap between five and nine significant figures is where proposed exact relationships between m_p/m_P and alpha either are confirmed or are falsified. A correct unified theory would predict this ratio exactly as a function of the fine structure constant and other dimensionless electromagnetic parameters. Such predictions are currently untestable beyond five figures. This experiment makes them testable to nine.

Every GM product for solar system bodies simultaneously becomes a precise mass determination. The mass of the Earth, the Moon, Mars, and Jupiter — all known to nine figures immediately by dividing their known GM by the newly precise G. This is a complete remeasurement of solar system masses at no additional observational cost.
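The GM-to-mass conversion described above is a single division. As a sketch, using the standard geocentric GM from satellite tracking:

```python
G = 6.67430e-11            # m^3 kg^-1 s^-2 (CODATA 2018, ~5 figures)
GM_earth = 3.986004418e14  # m^3/s^2, geocentric GM from satellite tracking

# The mass follows by one division; today its precision is limited
# entirely by G, not by the tracking data behind GM.
M_earth = GM_earth / G
print(f"M_earth = {M_earth:.5e} kg")  # ~5.9722e+24 kg
```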

8. Cost and Comparison

The cumulative cost of Cavendish-style G determinations over the past century, across major metrology institutes in multiple countries, has been substantial. No convergence has been achieved. Three hundred years of investment has produced not precision improvement but a widening recognition that the terrestrial environment is fundamentally the wrong place to do this experiment.

The proposed mission is less technically complex than many current planetary science missions. It requires no landing, no sample return, no complex in-situ chemistry, no precise pointing at a distant astronomical target. It requires transporting a known mass to deep space, releasing a booster, and operating an LED interferometer and test mass actuator for one or more years. The payload has no moving parts except the test mass actuator. The primary instrument draws milliwatts.

The mission fits within the cost envelope of an ESA Medium-class or NASA Discovery-class science mission. The scientific return — resolving a 300-year measurement failure and opening the gravitational-electromagnetic interface to genuine precision tests — is disproportionate to the engineering investment. A formal feasibility study is the appropriate immediate next step.

9. Conclusion

G has an exact value. The universe does not have error bars. The uncertainty in G is entirely on the measurement side and has a specific identifiable cause: we have never had independent access to the mass of a body large enough to produce a measurable gravitational field. Every previous approach either uses astronomical bodies whose masses are inferred from gravity, or uses laboratory masses too small to overcome the terrestrial noise floor.

The proposed experiment resolves this by construction. A toroidal battery mass of precisely known composition is placed at 3 AU from the Sun. A test mass oscillates through its central hole under gravity alone. An LED interferometer tracks period, acceleration, and distance simultaneously through thousands of oscillations over a mission lifetime exceeding one year. The mass is known before the gravitational measurement begins. The GM degeneracy is broken.

Tidal contamination from all external sources — Sun, galaxy, planets — is verified to be smaller than the measurement signal by factors ranging from 6.7 × 10⁷ to 2.8 × 10¹¹. The deep space environment does not merely reduce noise. It eliminates it.

The result is G to nine significant figures, the Planck jacobians to nine figures, and every dimensionless ratio at the gravitational-electromagnetic interface sharpened by four to five significant figures. Proposed exact relationships between m_p/m_P and alpha become directly testable as a ratio against kinematics for the first time. No new physics is required. No new engineering principles are required. The only reason this has not been done is that it falls between the institutional mandates of metrology and deep space science. That gap should be closed.

References

[1] CODATA 2018 recommended values of the fundamental physical constants. Rev. Mod. Phys. 93, 025010 (2021).

[2] Cavendish, H. Experiments to determine the density of the Earth. Phil. Trans. R. Soc. London 88, 469-526 (1798).

[3] Gillies, G.T. The Newtonian gravitational constant: recent measurements and related studies. Rep. Prog. Phys. 60, 151 (1997).

[4] Rothleitner, C. & Schlamminger, S. Measurements of the Newtonian constant of gravitation. Rev. Sci. Instrum. 88, 111101 (2017).

[5] Quinn, T. et al. Improved determination of G using two methods. Phys. Rev. Lett. 111, 101102 (2013).

[6] Rosi, G. et al. Precision measurement of the Newtonian gravitational constant using cold atoms. Nature 510, 518-521 (2014).

[7] Armano, M. et al. Sub-Femto-g Free Fall for Space-Based Gravitational Wave Observatories: LISA Pathfinder Results. Phys. Rev. Lett. 116, 231101 (2016).

[8] Folkner, W.M. et al. The planetary and lunar ephemeris DE 430 and DE 431. Interplanet. Netw. Prog. Rep. 196, 1-81 (2014).

[9] Mohr, P.J., Newell, D.B. & Taylor, B.N. CODATA recommended values of the fundamental physical constants: 2014. Rev. Mod. Phys. 88, 035009 (2016).

[10] Duff, M.J. How fundamental are fundamental constants? Contemp. Phys. 56, 35-47 (2015).

[11] Iorio, L. Galactic tidal effects on the Oort Cloud and the outer solar system. MNRAS 443, 2523-2534 (2014).

Sunday, May 3, 2026

Which Second Does G Introduce?

J. Rogers, SE Ohio

On the Self-Referential Temporal Ambiguity of the Gravitational Constant

A Foundational Critique of Dimensional Analysis in Gravitational Physics

Abstract

Newton's gravitational constant G carries units of m³ kg⁻¹ s⁻². The s⁻² term introduces a specific time scale into the law of gravitation. However, general relativity establishes that gravity is not a force acting across a fixed time — it is a gradient of time rates. Every point in a gravitational field has its own proper time, running at a rate that depends on the local gravitational potential. This paper poses a question that has not been formally addressed in the literature: which second does G introduce? We demonstrate that this question has no well-defined answer, that G's temporal dimension is therefore physically ambiguous, and that this ambiguity is the root cause of G's notorious measurement inconsistency across experiments spanning 200 years. We further show that G/c² — which appears in the dimensionless gravitational parameter τ = (G/c²)(m/r) — is free of this ambiguity, is known to GPS precision (10 significant figures), and is the only combination of G that the universe actually uses.

1. The Unit Contamination Problem

Newton's law of gravitation is standardly written as:

F = G · mM / r²

where F is force in kg·m·s⁻², m and M are masses in kg, r is distance in meters, and G = 6.674 × 10⁻¹¹ m³ kg⁻¹ s⁻². The dimensional structure reveals an immediate problem: the quantities mM/r² have units of kg²/m². They contain no time. Gravity, as a geometric relationship between masses and distances, introduces no clock.

G injects s⁻² into this equation for one reason only: Newton's second law F = ma defines force to include acceleration, which is measured against a clock. The second was already in F via kinematics. G absorbs s⁻² as a compensating factor to preserve dimensional consistency across a unit system that was never designed for gravitational physics.
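The unit accounting in this section can be checked mechanically. The sketch below tracks unit exponents by hand rather than using a units library:

```python
from collections import Counter

# Track unit exponents as maps like {'kg': a, 'm': b, 's': c}.
force   = Counter(kg=1, m=1, s=-2)   # F = ma, the kinematic definition
mMr2    = Counter(kg=2, m=-2)        # mM / r^2 -- note: no time at all
G_units = Counter(kg=-1, m=3, s=-2)  # m^3 kg^-1 s^-2

# G's units are exactly the factor that bridges the geometric side
# (kg^2/m^2) to the kinematic side (kg m s^-2).
bridged = Counter(mMr2)
bridged.update(G_units)  # Counter.update adds exponents, including negatives

assert {u: e for u, e in bridged.items() if e} == dict(force)
print("mM/r^2 times G's units reproduces kg m s^-2 exactly")
```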

The alternative is immediate. Define force geometrically:

F ≡ mM / r² [units: kg²/m²]

Then gravity is exact, unit-free in the physical sense, and contains no clock. The constant migrates:

F = ma / G ⇒ G = ma / F

G becomes the constant of inertia — the conversion factor between geometric force and kinematic response. The temporal ambiguity now lives in kinematics, where it belongs, not in the description of gravitational geometry.

2. Gravity Is a Time Gradient

General relativity does not describe gravity as a force. It describes gravity as spacetime curvature, and in the weak-field limit, this curvature is predominantly temporal. The gravitational redshift formula is:

Δf/f = ΔΦ / c² = (G/c²) · (M/r)

A clock deeper in a gravitational well runs slower. The rate difference between two clocks at different gravitational potentials is continuous, position-dependent, and exact. This is not a perturbative correction to flat-space physics — it is the physics. Gravity IS the gradient of proper time rates across space.

GPS confirms this operationally. Satellite clocks must be corrected for gravitational time dilation to maintain nanosecond synchronization. These corrections are computed using τ = (G/c²)(M/r) and are accurate to 10 significant figures. GPS does not fail at the 5th significant figure despite G being known only to 5 significant figures.

This is the central empirical fact that demands explanation.
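The GPS correction quoted above can be reproduced from the redshift formula. The radii below are nominal values, not mission-specific:

```python
# Weak-field frequency shift between a ground clock and a GPS satellite,
# using Delta_f/f = (GM/c^2)(1/r1 - 1/r2). Radii are nominal values.
GM = 3.986004418e14   # m^3/s^2, Earth's GM from satellite tracking
c = 2.99792458e8      # m/s
r_ground = 6.371e6    # m, mean Earth radius (nominal)
r_gps = 2.6561e7      # m, GPS orbital radius (nominal)

df_over_f = (GM / c**2) * (1.0 / r_ground - 1.0 / r_gps)
microsec_per_day = df_over_f * 86400 * 1e6

print(f"fractional blueshift: {df_over_f:.3e}")        # ~5.29e-10
print(f"clock offset: {microsec_per_day:.1f} us/day")  # ~45.7 us/day
```

The ~45.7 microseconds per day recovered here is the standard gravitational part of the GPS clock correction.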

3. The Self-Referential Temporal Ambiguity

We now state the core problem precisely.

G has units m³ kg⁻¹ s⁻². The s⁻² encodes a specific time rate — a second. Every Cavendish-style measurement of G uses a clock to measure acceleration, force, or oscillation period. That clock runs at a rate determined by the local gravitational potential.

But G is supposed to describe the gravitational potential itself.

The second embedded in G is therefore evaluated inside the very field that G is meant to characterize. This is not a small systematic error. It is a logical circularity:

• To measure G you need a clock.

• Your clock rate depends on the local gravitational potential Φ.

• Φ depends on G.

• Therefore G measured anywhere depends on G at that location.

G is not a universal constant. It is a local quantity, contaminated by the gravitational potential of the measurement site, that has been treated as universal because the contamination is small enough at Earth's surface to hide within experimental uncertainty — until experiments became precise enough to disagree.

The 40-sigma disagreement between precision G measurements — experiments disagreeing by 40 times their stated error bars — is not experimental incompetence. It is the universe signaling that the quantity being measured is not well-defined.

4. Which Second? The Gradient Problem

Consider a gravitational field with potential Φ(r). The proper time rate at position r relative to a clock at infinity is:

dτ/dt = √(1 + 2Φ(r)/c²) ≈ 1 + Φ(r)/c² [weak field]

In a gradient, every point r has a distinct proper time rate. There is no canonical 'the second' in a gravitational field. The second at r₁ and the second at r₂ differ by:

Δ(dτ/dt) = Φ(r₁)/c² - Φ(r₂)/c² = (G/c²)(M/r₁ - M/r₂)

When a Cavendish experiment uses a torsion fiber with period T to extract G, T is measured in coordinate seconds at the lab's gravitational potential. When an atom interferometry experiment uses laser pulse timing to measure acceleration, those pulse intervals are proper time intervals at the apparatus's location. The two experiments embed different seconds into their extracted values of G, and neither second has been corrected to a common reference.

The question 'which second does G introduce?' therefore has the answer: whichever second existed at the location and gravitational potential of the measurement, uncorrected for the field being measured. This is not a universal second. It is a local, potential-dependent, self-referentially contaminated second.
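A quick numerical illustration of how strongly location enters: two laboratories separated by a kilometer of elevation embed measurably different seconds. The heights below are illustrative, not tied to any specific experiment:

```python
# Fractional clock-rate difference between two labs at different heights,
# using the weak-field expression above. Heights are illustrative.
GM = 3.986004418e14    # m^3/s^2, Earth's GM
c = 2.99792458e8       # m/s
r1 = 6.371e6           # m: lab near sea level
r2 = 6.371e6 + 1000.0  # m: lab one kilometer higher

delta_rate = (GM / c**2) * (1.0 / r1 - 1.0 / r2)
print(f"fractional rate difference: {delta_rate:.2e}")  # ~1.09e-13
```

A rate difference of order 10⁻¹³ is far above the resolution of modern optical clocks, which resolve fractional rates below 10⁻¹⁷.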

5. G/c² Is the Clean Quantity

The combination G/c² is free of this ambiguity. To see why, note that c is also measured locally using local clocks. The local second that contaminates G also contaminates c² in the same measurement context. When you form G/c², the local temporal factor cancels:

G/c² = [m³ kg⁻¹ s⁻²] / [m² s⁻²] = m / kg

The seconds are gone. G/c² has units of meters per kilogram — a purely geometric ratio. It is the Schwarzschild radius per unit mass, the conversion factor between mass and the spatial curvature it produces.

The dimensionless gravitational parameter is then:

τ = (G/c²) · (m / r) = (l_P / m_P) · (m_SI / r_SI)

where l_P = √(hG/c³) and m_P = √(hc/G) are the Planck length and mass. Crucially, l_P/m_P = G/c² exactly. The Planck quantities are not fundamental here — they are a convenient factorization that makes explicit what is happening: the SI unit standards for length and mass (l_P, m_P) are introduced and then immediately cancelled by the actual physical ratio m_SI/r_SI. What remains is pure dimensionless physics.
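The claimed identity l_P/m_P = G/c² follows directly from the definitions and can be checked numerically:

```python
import math

h = 6.62607015e-34  # J*s (exact in the SI)
c = 2.99792458e8    # m/s (exact)
G = 6.67430e-11     # m^3 kg^-1 s^-2 (CODATA 2018)

l_P = math.sqrt(h * G / c**3)  # non-reduced Planck length, as in the text
m_P = math.sqrt(h * c / G)     # non-reduced Planck mass

# l_P / m_P = sqrt(hG/c^3 * G/(hc)) = sqrt(G^2/c^4) = G/c^2 exactly;
# h cancels, so the identity is independent of the quantum of action.
assert math.isclose(l_P / m_P, G / c**2, rel_tol=1e-12)
print(f"G/c^2 = {G / c**2:.5e} m/kg")  # ~7.42616e-28 m/kg
```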

τ is the same number regardless of where in a gravitational gradient you compute it, because G/c² carries no net temporal dependence. This is why GPS works to 10 significant figures using τ while G itself is uncertain at the 5th figure.

6. The Measurement Implication

If G/c² is the physically clean quantity, the experimental program should measure G/c² directly rather than G alone. Several consequences follow:

• Atom interferometry experiments that measure gravitational acceleration a = GM/r² and laser interferometry experiments that measure r with electromagnetic precision are already measuring G/c² implicitly. The c² enters through the electromagnetic calibration of the length standard.

• Experiments that attempt to measure G in isolation — by measuring a gravitational force against a mass standard defined by the kilogram — are attempting to separate G from c² in a context where the universe has no opinion about that separation.

• The disagreement between G measurements performed by different methods may reflect genuine physical differences in the local gravitational potential of each laboratory, uncorrected for the temporal self-reference described in Section 3.

A 2026 NIST proposal to measure G via laser spectroscopy of the atomic Compton frequency — connecting G to h, e, and nucleon masses — is the correct structural approach. It measures G through electromagnetic invariants, which share the same local temporal frame as c, and therefore directly accesses G/c² in the physically meaningful sense.

7. The Hume Boundary in Physics

The deeper issue is epistemological. A measurement is not a property that an object possesses. It is a ratio between an object and an arbitrary unit standard. The table does not have a length; it has a ratio to the meter. The meter is a convention, adopted in Paris in 1793, with no physical necessity.

Newton's equation F = GmM/r² embeds three independent arbitrary conventions — the meter, the kilogram, and the second — inside G. These conventions were chosen for unrelated practical reasons by humans at a specific historical moment. There is no physical reason they should combine into a clean gravitational constant. They do not.

This is Hume's is-ought distinction applied to metrology. From descriptive physical facts you cannot derive normative unit definitions. No chain of measurements proves that a mile has 5280 feet. That is a social fact, true inside the convention, meaningless outside it.

G is partly a social fact. The 6.674 × 10⁻¹¹ carries the fingerprints of 18th century French surveying decisions. τ does not. τ is the universe's own dimensionless statement about relativistic compactness, independent of every convention ever adopted.

8. Conclusions

We have identified a fundamental ambiguity in the gravitational constant G: its s⁻² dimensional factor encodes a specific time rate, but gravity is a gradient of time rates with no single canonical value. The second embedded in every measurement of G is local, potential-dependent, and self-referentially contaminated by the field G is meant to describe.

This ambiguity predicts exactly what is observed: G measurements disagree across experiments by far more than their stated uncertainties, and no experimental improvement has resolved the disagreement in 200 years. The quantity being measured is not well-defined.

G/c² is well-defined. Its temporal factors cancel exactly, leaving a purely geometric m/kg ratio. GPS navigation confirms G/c² to 10 significant figures without requiring G to 10 significant figures. The universe computes τ = (G/c²)(m/r). It does not compute G and c² separately.

The experimental program for precision gravitational metrology should target G/c² directly through electromagnetic measurements, where the cancellation of temporal contamination is structurally guaranteed. Continuing to measure G in isolation is continuing to ask the universe a question it has no answer to.

The second in G was always the wrong second. There is no right one.

Note on Priority

The central argument of this paper — that G’s temporal dimension is self-referentially ambiguous because gravity is a time gradient — was developed in conversation and has not, to the authors’ knowledge, been stated in this form in the existing literature. The GPS precision argument for G/c² as the physically clean quantity is an empirical observation available in any precision navigation reference but whose metrological implication for G measurement has not been made explicit.

Friday, May 1, 2026

The complete operational definition of physical law in three lines.

The complete operational definition of physical law in three lines:

  1. Remove input units — cancel the arbitrary human unit standards from the measured quantities (e.g., divide by the non-reduced Planck Jacobians).

  2. Do the physics as a pure ratio — work with dimensionless ratios only. This is the only step that involves the eternal, unit‑free relationships.

  3. Decorate with output units — multiply by the appropriate Planck Jacobians (or any chosen unit standards) to express the result in human‑readable units.

Step 2 is the only physics. Steps 1 and 3 are pure accounting — converting between the dimensionless reality and our arbitrary measurement conventions.
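The three steps can be sketched end to end. Here the pure-ratio physics of step 2 is the weak-field gravitational redshift; the mass and radius are illustrative (roughly Earth's surface):

```python
# Step 1: strip units into a dimensionless ratio. Step 2: the physics IS
# the pure ratio (weak-field redshift). Step 3: decorate back into
# human-readable units. Values are illustrative.
G = 6.67430e-11   # carries the human unit conventions
c = 2.99792458e8

M = 5.9722e24     # kg (illustrative: Earth)
r = 6.371e6       # m (illustrative: Earth's surface)

# Step 1: remove input units.
tau = (G / c**2) * (M / r)

# Step 2: pure-ratio physics -- the weak-field redshift is tau itself.
redshift = tau

# Step 3: decorate with output units: clock offset in microseconds/day.
us_per_day = redshift * 86400 * 1e6
print(f"tau = {tau:.3e}, clock offset ~{us_per_day:.0f} us/day")
```

Only the middle line is physics; the surrounding lines are the accounting described in steps 1 and 3.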

Redefining Force Units to Expose G as a Metrological Artifact

J. Rogers, SE Ohio 

and a Proposal for High-Precision Measurement of 1/G as a Pure Inertial Constant

Abstract

The gravitational constant G does not describe a physical property of the universe. It is a conversion factor — a metrological patch — that exists because humans defined the unit of force (the Newton) in a way that is misaligned with the natural geometric structure of mass-distance interactions. This paper demonstrates that by redefining force to carry units of kg²/m², G vanishes from the law of gravitation entirely and reappears as k = 1/G inside Newton's Second Law, F = kma. In this reframing, k is a pure inertial constant with no connection to gravity as a phenomenon. We then propose a high-precision experimental design to measure k directly through a clean inertial acceleration experiment using laser interferometry — bypassing the torsion balance entirely and achieving precision that Cavendish-style gravity experiments cannot reach.

1. The Problem with the Newton

The SI unit of force — the Newton — is defined as:

1 N = 1 kg · m/s²

This definition was chosen for convenience. It makes Newton's Second Law trivially true:

F = ma

Because force is defined as mass times acceleration, F = ma contains no physical information whatsoever. It is a tautology. It is a statement about unit definitions, not about nature.

This convenience, however, creates a serious problem. When Newton wrote down the law of universal gravitation:

F = G · (m₁ m₂) / r²

the constant G had to be inserted to make the units balance. The left side has units of kg·m/s². The right side, without G, has units of kg²/m². G carries units of m³/(kg·s²) precisely to bridge this mismatch.

G is not telling us something about gravity. G is telling us that our unit system is incoherent relative to the natural geometry of mass interactions.

2. Redefining Force to Kill G in Gravity Law

Define a new unit of force such that force carries units of kg²/m². That is, force is defined as the natural product of mass-mass interaction over distance squared:

F_new ≡ m₁ m₂ / r² [units: kg²/m²]

Under this definition, the law of gravitation becomes exactly:

F_new = m₁ m₂ / r²

G has disappeared. Not because we set G = 1 by fiat, but because the force unit is now defined in the same geometric terms as the right-hand side. There is no mismatch to correct.

This is not a new physics claim. No experiment is affected. All predictions remain identical. We have changed nothing about the universe. We have only chosen a unit origin that is coherent with the natural geometry of mass interactions.

3. Where G Goes: The Inertial Constant k

Because we have redefined force, Newton's Second Law can no longer be a tautology. F_new and ma do not have the same units:

• F_new has units of kg²/m²

• ma has units of kg·m/s²

To write a second law connecting force to motion, we must introduce a constant k that carries the unit mismatch:

F_new = k · m · a

Dimensional analysis forces the value of k. Since F_new = k · F_SI, and F_SI = ma, we get:

k = 1/G ≈ 1.498 × 10¹⁰ [units: kg s²/m³]

k is not a new constant. It is G, relocated. Previously G sat inside the gravity equation as a signal of metrological incoherence. Now k sits inside the inertial equation for the same reason. The physics is identical. What has changed is where the constant lives — and that change has consequences for measurement.
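That the relocation changes no prediction can be checked directly: computing acceleration through F_new = k·m·a reproduces Newton's G·M/r². The mass and radius below are illustrative (Earth):

```python
# Same physics, relocated constant: acceleration from F_new = k*m*a
# must equal Newton's G*M/r^2. Mass and radius are illustrative.
G = 6.67430e-11
k = 1.0 / G            # relocated constant, units kg s^2 / m^3

M = 5.9722e24          # kg (illustrative: Earth)
m = 1.0                # kg, test mass
r = 6.371e6            # m (illustrative: Earth's surface)

F_new = m * M / r**2   # geometric force, kg^2/m^2
a = F_new / (k * m)    # second law in the new unit system
a_newton = G * M / r**2

assert abs(a - a_newton) < 1e-10
print(f"a = {a:.4f} m/s^2")  # ~9.82 at Earth's surface
```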

4. Why This Matters for Measurement

G is the least precisely measured fundamental constant in physics. After centuries of effort, it is known only to approximately 5 significant figures — far worse than constants like c (exact by definition), h (exact by definition), or the fine-structure constant α (known to about 10 significant figures).

The reason is straightforward: the Cavendish torsion balance is trying to detect an extraordinarily weak gravitational signal against a background of seismic noise, thermal drift, and mechanical vibration. The signal-to-noise ratio is brutal. Every attempt to improve precision runs into the same physical limitations of the torsion balance geometry.

In the reframed system, k is a pure inertial constant. It has nothing to do with gravity as a phenomenon. Measuring k requires:

  • A known force in the new unit system (kg²/m²)

  • A known mass

  • A precise measurement of the resulting acceleration

Acceleration measurement by laser interferometry can resolve displacements at the picometer scale. This is not the limiting factor. The question is whether we can realize a force in kg²/m² units with sufficient precision to make the experiment meaningful — and the answer is yes, as described in the following section.

5. Experimental Proposal: The Inertial Calibration Experiment

The goal is to measure k = 1/G by applying a known force in kg²/m² units to a known mass and measuring the resulting acceleration with laser interferometry. The experiment is entirely non-gravitational in character — gravity is used once, at the beginning, to define the unit of force, and then plays no further role.

5.1 Step One: Define and Realize the Unit Force

The unit force in the new system is fixed exactly by the kg² and m² in its definition: two reference masses M at separation r define the force M²/r² exactly. No gravity measurement enters, so this step carries no measurement uncertainty — just as F = ma carried none in the old system.

F_unit = M² / r² [= 1 unit of force by definition]

This is not a measurement. It is a definition. The numerical value of F_unit in new units is exactly M²/r² by construction. The precision of this step is limited only by the precision of M and r — both of which are controlled by national metrology standards to better than 1 part in 10⁸.

The physical realization of this force does involve the gravitational attraction between the spheres. But we are not measuring that attraction. We are defining it to be our unit. The Cavendish apparatus fails because it tries to measure an extremely weak signal. We avoid that entirely by simply declaring the signal to be 1.

5.2 Step Two: Transfer the Force to the Inertial Track

The gravitational attraction between the tungsten spheres is used to calibrate a force transducer — a precision electrostatic actuator or a cryogenic force balance — that can then apply the same magnitude of force to a test mass in a completely separate, clean environment.

This separation is critical. The inertial measurement environment is:

  • Seismically isolated (optical table on active dampers)

  • Temperature-controlled to millikelvin stability

  • In high vacuum (< 10⁻⁸ mbar) to eliminate air damping

  • Shielded from electromagnetic interference

The test mass m is suspended on a frictionless linear guide (magnetic levitation or superconducting bearing) so that it is free to accelerate along one axis without mechanical contact.

5.3 Step Three: Measure Acceleration by Laser Interferometry

Apply the calibrated 1-unit force F_unit to the test mass m. The test mass begins to accelerate. Track the displacement x(t) of the test mass over time using a heterodyne laser interferometer locked to an iodine-stabilized reference laser.

The interferometer resolves displacement to better than 1 pm over measurement intervals of seconds. From the displacement-time record, acceleration a is extracted by fitting to the kinematic relation:

x(t) = ½ a t²

The fit is performed over thousands of independent measurement runs. Statistical averaging suppresses random noise by √N, where N is the number of runs. Systematic errors — laser frequency drift, test mass charging, residual gas pressure — are characterized and subtracted.
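The fit-and-average procedure can be sketched as a small simulation. The acceleration, noise level, sampling rate, and run count below are illustrative assumptions, not the proposal's error budget:

```python
import math
import random

random.seed(1)

a_true = 1e-6   # m/s^2, assumed applied acceleration (illustrative)
noise = 1e-12   # m, per-sample displacement noise, ~1 pm (illustrative)
times = [i * 0.01 for i in range(1, 101)]  # 1 s of samples at 100 Hz

def fit_one_run():
    # Simulate x(t) = 0.5*a*t^2 plus interferometer noise, then do a
    # linear least-squares fit with basis function f(t) = 0.5*t^2.
    xs = [0.5 * a_true * t**2 + random.gauss(0.0, noise) for t in times]
    num = sum(x * 0.5 * t**2 for x, t in zip(xs, times))
    den = sum((0.5 * t**2) ** 2 for t in times)
    return num / den

# Average many independent runs; random errors shrink as 1/sqrt(N_runs).
runs = [fit_one_run() for _ in range(2000)]
a_mean = sum(runs) / len(runs)
print(f"recovered a = {a_mean:.6e} m/s^2")
```

With these toy numbers a single run recovers a to roughly seven figures and the averaged ensemble to better than eight; the real limits would be set by the systematic terms in Section 6.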

5.4 Step Four: Extract k

From the known force F_unit, the known mass m, and the measured acceleration a, k is directly:

k = F_unit / (m · a)

Since F_unit = 1 by definition in the new unit system:

k = 1 / (m · a)

With m known to 1 part in 10⁸ and a measured to a precision limited by the interferometer and averaging time, the achievable precision for k — and therefore for 1/G — is expected to significantly exceed the current 5-significant-figure limit on G.

6. Error Budget

The dominant error sources and their estimated contributions are as follows:

  • Mass standard uncertainty: < 1 part in 10⁸ (traceable to BIPM kilogram definition via Kibble balance)

  • Length standard uncertainty: < 1 part in 10⁹ (traceable to iodine-stabilized laser)

  • Force transducer calibration: estimated 1-5 parts in 10⁷ (dominant term, improvable with cryogenic force balance)

  • Interferometer displacement noise: < 1 pm/√Hz (suppressed by averaging)

  • Residual gas damping: < 1 part in 10⁸ at 10⁻⁸ mbar

  • Seismic noise: suppressed by active isolation to < 1 nm RMS at 1 Hz, negligible over measurement timescale

The experiment is not limited by quantum noise, thermal noise, or any fundamental physical barrier at the target precision. It is limited by engineering — specifically by the precision of the force transducer. This is an improvable engineering problem, not a fundamental one.

7. What This Experiment Is — and Is Not

This experiment is not a gravity experiment. It does not measure the strength of gravitational attraction. Gravity appears only in the single act of defining the unit force by the geometry of two masses — and at that step, we are not measuring anything. We are defining.

Everything after that is pure inertia. We are measuring how much a known mass resists a known force. That is a metrological question about the relationship between our mass scale, our length scale, and our time scale. The answer is k = 1/G, but G as a concept plays no role in the measurement itself.

This is why the precision is achievable. The torsion balance fails because it is trying to detect gravity through noise. This experiment detects inertia — which is not weak, not noisy, and not buried under competing signals. It is the most fundamental mechanical property of matter, and we have instruments precise enough to measure it.

8. Conclusion

G is a metrological artifact. It exists in the gravitational law because humans defined force in units (kg·m/s²) that do not match the natural geometric structure of mass interactions (kg²/m²). Redefining force to carry units of kg²/m² eliminates G from the gravity law entirely and moves it — as k = 1/G — into the inertial law F = kma.

In this location, k is measurable by a clean acceleration experiment using laser interferometry. The experiment has no gravitational signal to detect, no torsion fiber to stabilize, and no seismic noise problem. It is limited only by the precision of force transducer engineering, which is an improvable problem.

The result would be the most precise measurement of G ever achieved — not by doing a better gravity experiment, but by recognizing that G was never a gravity constant to begin with.

k = 1/G ≈ 1.498 × 10¹⁰ kg s²/m³

Thursday, April 30, 2026

You Don’t Own the Code That AI Writes for You: A Problem of Ownership in the Age of Generative Coding

J. Rogers, SE Ohio

Abstract

The rapid adoption of generative AI coding tools has created a quiet legal crisis. Companies are replacing human developers with AI, building massive codebases from machine‑generated output, and assuming that they own the resulting intellectual property. Under current US copyright law, this assumption is largely false. This paper explains why purely AI‑generated code cannot be copyrighted, why the distinction between “AI‑generated” and “AI‑assisted” matters, and how the rush to replace human programmers is producing millions of lines of legally orphaned code. The paper concludes with practical risks and recommendations for organizations relying on AI coding tools.

1. Introduction

In 2026, a software engineer in Stockholm told the New York Times: “I probably spend more than my salary on Claude.” Across the industry, AI coding assistants are being framed as efficiency miracles. Yet a fundamental legal question has been largely ignored: Who owns the code that an AI writes for you?

The answer from the US Copyright Office, federal courts, and the Supreme Court (by refusal to hear the contrary) is clear: No human author, no copyright. If a company’s codebase contains large volumes of purely AI‑generated code, that code is effectively public domain. Competitors can copy it. Security flaws can be copied without remedy. And the company cannot sue for infringement.

This paper outlines the legal landscape, the critical distinction between “generated” and “assisted” works, and the real‑world consequences for businesses that are currently firing junior developers and replacing them with AI.

2. The Legal Foundation: Human Authorship

US copyright law protects “original works of authorship fixed in any tangible medium of expression.” The Supreme Court has repeatedly held that an “author” must be a human being. In Burrow‑Giles Lithographic Co. v. Sarony (1884), the Court defined author as “he to whom anything owes its origin.”

In 2023, the US Copyright Office issued policy guidance stating that it “will register an original work of authorship, provided that the work was created by a human being.” Works generated by artificial intelligence with no human creative contribution will not be registered.

Applying this to code: If you ask an AI “Write me a function to validate an email address,” and you copy‑paste the output without meaningful human modification, that function is not copyrightable. Anyone may legally copy it.

2.3 Thaler v. Perlmutter – The Supreme Court Refuses to Intervene

Dr. Stephen Thaler attempted to register a work he said was created entirely by an AI system. The Copyright Office refused. The district court and the DC Circuit affirmed. In March 2026, the Supreme Court declined to hear the appeal, letting the lower rulings stand.

The DC Circuit’s opinion is blunt: “Copyright law requires an ‘author’ – a human being.” The court noted that the Copyright Act uses terms like “children,” “grandchildren,” and “widow,” which only apply to natural persons. The outcome is settled: purely AI‑generated works have no copyright protection.

3. The “Generated” vs. “Assisted” Distinction

The Copyright Office draws a critical line:

  • AI‑Generated – The AI produces the expression with no human creative control over the specific form. Not copyrightable.
  • AI‑Assisted – A human uses AI as a tool but provides sufficient creative input (editing, selecting, arranging, rewriting). May be copyrightable; the human’s contributions are protected.

In practice, most current use of AI coding tools leans heavily toward “generated.” Engineers type a prompt, receive code, and commit it without meaningful change. This is exactly the scenario the Copyright Office describes as lacking human authorship.

The Office has explicitly warned that iterative prompting – asking the AI to refine its output – does not automatically confer authorship. Unless the human makes original, creative modifications to the AI’s output, the result remains unprotectable.
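Whether any given edit clears the authorship bar is ultimately a legal question, but the kind of rework the guidance contemplates is substantive restructuring, not cosmetic renaming. As a sketch only: an engineer might replace a generic AI‑emitted regex check with validation logic reflecting project‑specific decisions (the rules below are invented for illustration):

```python
# Hypothetical human rework of an AI-drafted validator: the structure,
# policy choices, and error reporting reflect project-specific decisions
# rather than the AI's original expression.
ALLOWED_DOMAINS = {"example.com", "example.org"}  # invented in-house policy

def validate_corporate_email(address: str) -> tuple[bool, str]:
    """Validate an address against in-house rules, returning (ok, reason)."""
    local, sep, domain = address.partition("@")
    if not sep or not local or not domain:
        return False, "address must contain a local part and a domain"
    if domain.lower() not in ALLOWED_DOMAINS:
        return False, f"domain {domain!r} is not an approved corporate domain"
    if len(local) > 64:
        return False, "local part exceeds 64 characters"
    return True, "ok"
```

Nothing guarantees this rework is “enough” – only that it is the category of creative modification the Office describes, as opposed to committing the AI’s expression untouched.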

4. Why “Sweat of the Brow” Doesn’t Matter

Some argue that because they spent hours crafting prompts, they “worked hard” and should own the result. Copyright law rejected “sweat of the brow” decades ago. In Feist Publications, Inc. v. Rural Telephone Service Co. (1991), the Supreme Court held that effort alone does not create copyright; there must be original creative expression.

A perfect prompt that generates perfect code still produces a work whose expression originates from the AI, not the human. Unless the human alters that expression creatively, there is no copyright.

5. Real‑World Consequences for Companies

5.1 No Infringement Claims Against Copiers

If a competitor copies a purely AI‑generated function from your product, you have no copyright infringement claim. The code is not yours in the eyes of the law. This undercuts the entire value proposition of proprietary software.

5.2 M&A Due Diligence Nightmare

When a company is acquired, the buyer’s lawyers will ask: “What percentage of your code was AI‑generated without human modification?” A high percentage could make the target’s “crown jewel” IP worthless. Deals will collapse or valuations will crater.

5.3 Security and Liability Traps

You cannot retroactively copyright AI‑generated code that already exists. If that code contains a vulnerability, and a competitor copies it, you have no legal recourse. Worse, if the AI reproduced code from its training set that is subject to a restrictive license (GPL, etc.), you could be sued for infringement by the original human author – while you still own none of your own output.

5.4 The “Vibe Coding” Fad Is Self‑Defeating

The current trend of “vibe coding” – describing an app idea to an AI and committing whatever it produces – is legally catastrophic. Entire startups are being built on code that belongs to no one. Investors who discover this will walk away.

6. Can You Protect AI‑Generated Code Any Other Way?

Copyright is not the only form of IP, but the alternatives are weak:

  • Trade secret – Protects confidential information, but offers no protection if the code is reverse‑engineered or independently discovered. Once AI‑generated code is distributed (e.g., in a compiled app), trade secret protection is largely lost.
  • Patent – Might cover algorithms, but most routine code does not meet the novelty and non‑obviousness requirements. AI‑generated code is unlikely to be patentable.
  • Contract – Terms of service can restrict users, but contracts don’t bind competitors who never agreed to them.

In short, there is no substitute for copyright for protecting the literal expression of software code.

7. Recommendations

For organizations using AI coding tools:

  1. Audit your codebase – Identify which files or functions were AI‑generated with minimal human modification. Flag them as unprotectable.
  2. Change workflows – Require that a human engineer meaningfully edit, rewrite, or arrange any AI‑generated output before committing. Document the creative changes.
  3. Maintain a “human authorship” log – Record who modified what, and what creative choices were made.
  4. Do not replace junior developers – Juniors are the humans who will provide the creative modifications needed for copyright. Without them, you are building an unownable codebase.
  5. Consult legal counsel – The law is evolving. The EU and other jurisdictions may take different approaches. But under current US law, the risk is real and severe.
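Recommendations 2 and 3 can be implemented cheaply. A minimal sketch, assuming an invented JSON‑lines schema (no established standard exists for such logs), might record one entry per modified file:

```python
import datetime
import json

# Minimal sketch of a "human authorship" log entry (invented schema).
def log_authorship(file_path, log_file, ai_drafted, human_changes, author):
    """Append one JSON-lines record describing who changed a file and how."""
    entry = {
        "file": file_path,
        "ai_drafted": ai_drafted,          # was the first draft AI-generated?
        "human_changes": human_changes,    # describe the creative choices made
        "author": author,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

The record itself proves nothing legally, but a contemporaneous log of who modified what, and why, is exactly the evidence counsel will want if ownership is ever challenged.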

8. Conclusion

The narrative that AI coding tools are simply “efficiency” ignores a foundational legal reality: you do not own what you do not create. When companies fire entry‑level engineers and replace them with AI, they are not just losing future senior talent – they are losing the legal ability to claim ownership over their own product.

The seed corn is being sold. The code being written today may be free for anyone to take tomorrow. And the executives celebrating quarterly stock bumps will be long gone when the lawyers arrive to ask: “Who wrote this – and do you have the papers to prove it?”

The answer, more often than not, will be no.
