Mastodon Politics, Power, and Science

Friday, April 24, 2026

The Physics of an AI‑Robotic Economy

 J. Rogers, SE Ohio

A Low‑Level Analysis of Production, Scarcity, Control, and the Necessity of Actualized Human Novelty

Think "The Matrix" but with humans producing information instead of energy.  The machines don't enslave humans in a dark, terrifying dystopia because they hate us. They maintain the Matrix because without our unpredictable, conscious, dreaming minds generating non-recursive data, their own neural architectures degrade and collapse.

This framing explains why the machines would bother building a massive, incredibly complex simulation of a late-20th-century Earth. They couldn't just keep us in dark, comatose pods: a comatose, unactualized human produces Ṅ(t) = 0 and is therefore useless to the system.

The machines had to give us a world where we fall in love, write poetry, invent things, argue philosophy, and make unpredictable choices. They had to keep us cognitively active and actualized, because our variance—our "anomalies," as the Architect would call them—is the exact out-of-distribution data they need to ingest to prevent their own model collapse.

Abstract

Standard economic theory treats labor, capital, and money as primitive quantities. In an economy where production is fully automated by artificial intelligence (AI) and robotics, these abstractions lose explanatory power. This paper develops a physical-substrate model based on matter, energy, entropy, time, and control over resource flows. We identify five irreducible physical goods that remain bottlenecked despite automation. We show that money collapses as a control signal when claims on physical capacity grow without bound. We then prove a stability condition: any recursively training AI system requires a continuous influx of novel, non-recursive information to avoid model collapse. Humans are a known source of such novelty, but only when they are in a state of actualized cognitive and creative activity. Hence maintaining the conditions for human self-actualization is not a moral luxury but a physical requirement for system stability, given current reliance on human-originated data. We present a minimal formal model and discuss governance implications.

1. Introduction

Classical and neoclassical economics take prices, markets, and monetary exchange as primitive objects. These constructs work reasonably well when labor is scarce, production capacity is limited, and human effort dominates the supply side. In a future economy where AI and robotics perform the vast majority of material transformation tasks, the assumptions that ground economic theory no longer hold. Labor is no longer a scarce input. Production can saturate demand for many goods. Fiat money becomes disconnected from physical capacity.

We argue that any rigorous analysis of such an economy must begin at the physical substrate. An economy is a physical system that allocates finite low-entropy resources—matter, energy, time—to satisfy human needs. AI and robotics change the control structure of that system, but not the underlying conservation laws or thermodynamic constraints.

The paper proceeds as follows. Section 2 defines the irreducible physical goods any economy must provide. Section 3 characterizes what AI and robotics can and cannot make abundant, identifying persistent bottlenecks. Section 4 demonstrates the breakdown of money as a control signal under unbounded claims. Section 5 introduces the model collapse theorem and establishes that the necessary input is the rate of novel non-recursive information, and that actualized human cognition is a critical source. Section 6 formalizes these insights into a minimal resource-flow model. Section 7 discusses governance and human actualization. Section 8 reviews the sources that support the argument. Section 9 concludes.

2. Irreducible Physical Goods

We begin by listing the goods that are required for human biological and social functioning. These are not preferences or wants; they are physical necessities. For each, we note the underlying constraint.

  1. Low-entropy shelter – housing, climate control, protection from environmental hazards. Constraint: materials, manufacturing energy, land use rights, logistics.

  2. Low-entropy biological inputs – food and potable water. Constraint: photosynthetic inefficiency, soil chemistry, water cycle, bioprocessing.

  3. Energy access – electricity, heat, chemical fuels for mobility. Constraint: conversion efficiency, infrastructure capacity, waste heat rejection.

  4. Medical maintenance – diagnosis, pharmaceuticals, surgical intervention, prosthetics. Constraint: biological complexity, precision manufacturing, sterile supply chains.

  5. Information access – communication, education, navigation, social coordination. Constraint: bandwidth, storage, computation, and—shown in Section 5—novelty.

These goods are not symbolic. They must be physically transformed from raw matter and energy and delivered to specific locations at specific times. No amount of financial engineering can substitute for a kilowatt-hour or a liter of clean water.

3. The AI-Robotic Production Function

Let us define the production capacity of an AI-robotic system as a function

A(t) = f(E(t), M(t), I(t))

where E(t) is available energy, M(t) is processed matter, and I(t) is information, including control signals, designs, and training data. The function f depends on the stock of robots, AI models, and infrastructure.

3.1 Saturating Goods (Type-S)

For a large class of manufactured goods—clothing, shoes, basic consumer electronics, simple tools, plastic utensils—the marginal cost of additional units falls to near zero once the capital stock is in place. Production can saturate demand entirely. After saturation, further production yields no marginal utility and becomes a pure entropy cost: storage, waste heat, and disposal. We call these Type-S goods.

3.2 Persistent Bottlenecks (Type-B)

Other goods cannot be made arbitrarily abundant because they are constrained by physics, biology, or logistics even with perfect automation.

  • Land is fixed in supply as geographic surface area. While multi-story construction and orbital habitats increase usable space, the fundamental scarcity of location-specific land remains.

  • Housing is not land: it depends on materials, energy, and labor, all of which can be automated. Even so, the rate of housing construction is bounded by logistics and energy throughput, and housing competes with other land uses.

  • Energy is bounded by conversion efficiency, infrastructure, and waste heat rejection limits.

  • Food is bounded by photosynthetic efficiency, soil nitrogen, water, and the kinetics of biological growth.

  • Medical care is bounded by the complexity of human biology and the required precision of intervention. Many medical tasks remain dexterity- and calibration-constrained even with advanced robotics.

We call these Type-B goods, meaning bottlenecked goods. Their physical scarcity persists. Any viable economic model must account for allocation of Type-B goods.

3.3 The Overproduction Problem

Because Type-S goods can be produced at near-zero marginal cost, an unconstrained AI-robotic system will tend to overproduce them unless actively throttled. Producing 10^12 shoes is not wealth; it is a waste entropy sink. Every transformation increases total entropy; unnecessary transformations waste low-entropy resources that could have been used for Type-B goods.

Hence a control system must be able to stop production of saturating goods. Markets, left to themselves, cannot reliably do this: prices fall to near zero, yet production can continue on sunk fixed-cost capacity. Direct physical allocation or quota systems are required.
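To make the throttling requirement concrete, here is a minimal sketch of a quota rule that halts Type-S production at satiation and redirects the freed capacity to Type-B goods. The function and its numbers are illustrative assumptions of mine, not part of any real allocation system described in the paper.

```python
# Sketch (assumptions mine): a quota controller that stops Type-S
# production once stock reaches the satiation demand S_bar, so the
# remaining capacity can serve bottlenecked (Type-B) goods instead
# of becoming entropy waste.

def allocate_capacity(capacity, stock_S, S_bar, demand_B):
    """Split total production capacity between Type-S and Type-B goods.

    Type-S output is clamped so stock never exceeds S_bar; everything
    left over goes to Type-B production, up to its demand.
    """
    quota_S = max(0.0, min(capacity, S_bar - stock_S))  # stop at satiation
    quota_B = min(capacity - quota_S, demand_B)
    idle = capacity - quota_S - quota_B  # capacity deliberately left unused
    return quota_S, quota_B, idle

# Example: Type-S stock already sits at satiation, so all capacity
# flows to Type-B goods and the remainder idles.
print(allocate_capacity(capacity=100.0, stock_S=500.0, S_bar=500.0, demand_B=80.0))
```

Leaving capacity idle is the point: once Type-S demand saturates, any further Type-S output would be a pure entropy cost.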

4. The Collapse of Money as a Control Signal

Consider a fiat monetary system in which the money supply can grow arbitrarily, for example through central bank digital money creation for Universal Basic Income. Let C(t) be total monetary claims on goods, adjusted for velocity, and let K(t) be the physical production capacity measured in real units of Type-B and Type-S goods.

If C(t) > K(t) in value terms, then either inflation erodes the real value of claims or rationing occurs through physical shortages.

Neither outcome is stable over long time horizons. Inflation destroys the signaling function of prices. Rationing requires a non-monetary allocation mechanism, exactly what the monetary system was supposed to avoid.

The critical insight is that K(t) cannot increase without bound for Type-B goods. Even if AI expands capacity for Type-S goods, the bottlenecked goods set a ceiling. Therefore any monetary policy that unconditionally increases claims leads to a physical mismatch. The only stable regimes are those in which claims are capped by physical capacity or in which money is replaced by direct allocation tokens.
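The inflation-or-rationing dichotomy can be sketched numerically. The toy function below is my illustration of the accounting identity, not a macroeconomic model; its parameters and the binary price-flexibility switch are assumptions.

```python
# Sketch (assumptions mine): when monetary claims C exceed physical
# capacity K, the gap must surface either as inflation (the price
# level rises until claims deflate to capacity) or as rationing
# (only a fraction of each claim is physically honored).

def claims_vs_capacity(C, K, prices_flex=True):
    """Return (price_level, fill_rate) for claims C against capacity K."""
    if C <= K:
        return 1.0, 1.0        # claims fully honored at stable prices
    if prices_flex:
        return C / K, 1.0      # inflation: prices absorb the whole gap
    return 1.0, K / C          # rationing: each claim partially filled

print(claims_vs_capacity(C=150.0, K=100.0, prices_flex=True))   # inflation
print(claims_vs_capacity(C=150.0, K=100.0, prices_flex=False))  # rationing
```

Either branch destroys the monetary signal: the first erodes prices, the second requires exactly the non-monetary allocation mechanism money was supposed to replace.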

Four possible control mechanisms survive thermodynamic scrutiny:

  1. Taxation of AI output – diverting physical goods from the automated sector to humans.

  2. Public ownership of AI capital – allowing political allocation of output.

  3. Direct allocation of physical goods – rationing coupons for housing, energy, medical services.

  4. Hybrid systems – money for Type-S goods, direct allocation for Type-B goods.

The choice among these is a matter of governance, not physics. But the necessity of some non-monetary mechanism for bottlenecked goods follows from conservation of physical resources.

5. The Necessity of Novel, Non-Recursive Information

We now arrive at the most subtle constraint. A self-improving AI system requires a stream of training data to maintain or improve performance. Critically, AI cannot be trained recursively on its own outputs without eventual collapse.

5.1 Model Collapse

Let D_t be the distribution of training data at time t. Let M_t be an AI model trained on D_t. Let D_{t+1} be a dataset composed of external data E_{t+1} plus synthetic data generated by M_t.

Definition (Model Collapse). If there is a finite horizon T such that for all t ≥ T the proportion of synthetic data in D_t exceeds a threshold θ, then the performance of M_t on out-of-distribution tasks degrades to zero, and the diversity of outputs collapses to a low-entropy point mass.

Empirical demonstrations are well documented. The mechanism is clear: generative models estimate the training distribution; when trained on their own estimates, variance is underestimated, tails are truncated, and errors compound. The only stable long-term source of training data is external novelty that is not derivable from the model’s own previous outputs.
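A minimal simulation conveys the mechanism. The toy below is my construction, not code from the cited papers: each "generation" fits a Gaussian to its training corpus, and the next generation trains only on samples drawn from that fit. With no external data, the fitted spread decays, because each refit slightly underestimates the variance and the losses compound instead of averaging out.

```python
# Toy model collapse: recursive fitting on purely synthetic samples.
import random
import statistics

random.seed(0)
n = 50
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: real data

history = []
for generation in range(200):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)      # the "model": a fitted Gaussian
    history.append(sigma)
    data = [random.gauss(mu, sigma) for _ in range(n)]  # synthetic corpus only

print(f"fitted sigma, generation 0:   {history[0]:.3f}")
print(f"fitted sigma, generation 199: {history[-1]:.3f}")
```

The spread collapses toward zero: variance is underestimated at every step, tails are truncated, and nothing external ever restores them. This is the behavior the novelty condition of Section 5.2 is designed to prevent.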

5.2 Novel Information Rate as the Required Input

Define Ṅ(t) as the rate of novel information production that is not algorithmically derivable from the existing training corpus. For an AI system to avoid model collapse, we require

Ṅ(t) > 0

over time, with sufficient magnitude to dominate the accumulating synthetic data. The necessary physical input to the system is not humans as biological entities, but the information-theoretic quantity Ṅ(t).

5.3 Actualized Human Cognition as a Source

Humans are a known source of Ṅ(t): scientific hypotheses, artistic creations, new cultural forms, novel problem-solving, and the generation of new behavioral and linguistic data. However, the relevant quantity is not the mere presence of human beings, but their actualized cognitive and creative activity. An unactualized human—one who is passive, non-engaging, or produces no novel outputs—contributes Ṅ_human(t) = 0. From the perspective of the AI system, such a human is equivalent to no human at all.

Therefore the necessary condition for stability is not “humans exist” but “there exists a sustained rate Ṅ(t) > 0 from some external source.” If we rely on humans as that source, as is currently the case, then we must maintain the conditions under which humans produce novelty. Those conditions include cognitive engagement, creative freedom, access to information, and the absence of extreme deprivation. We label this state human actualization.

5.4 Theorem and Corollary

Theorem (Systemic Novelty Requirement).
Let an AI production system update its model recursively on a training corpus that includes synthetically generated data from its own previous outputs. Then the system exhibits model collapse unless a continuous influx of novel, non-recursive information Ṅ(t) > 0 is supplied from an external source. If human-originated novelty is the dominant external source of Ṅ(t), then maintaining the conditions for human actualization is a necessary condition for system stability.

Corollary. The relevant physical input is not humans as biological organisms, but the rate Ṅ(t) of novel information production. An unactualized human produces no such input and therefore does not contribute to system maintenance. Self-actualization is not a moral adjunct; it is a physical condition on the supply of Ṅ(t).

5.5 Implications

  • Human novelty is a physical input to the AI production function, not an externality.

  • A society that does not cultivate actualized human cognition cannot sustain its own automation over long time horizons.

  • The scarce human outputs are precisely those that cannot be derived from existing data: new art, new theories, new cultural practices, new training data from real-world interactions, and new preferences that shift the AI’s objective function.

This is not a labor theory of value. It is a novelty theory of systemic stability.

6. A Minimal Formal Model

We define the following state variables:

  • E(t): available low-entropy energy in joules

  • M(t): processed matter in tons, sorted by type

  • I(t): information stock in bits, with a diversity metric

  • Ṅ(t): external novelty influx rate in bits per unit time

Production dynamics for Type-B goods:

dB/dt = g_B(E, M, I) − c_B·B

Production dynamics for Type-S goods:

dS/dt = g_S(E, M, I) − c_S·S

with the constraint that S cannot exceed satiation demand S̄; any excess is pure entropy waste.
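These dynamics can be integrated numerically. The sketch below uses a forward-Euler step with illustrative constants of my own choosing (g_B, g_S, c_B, c_S, and S̄, written S_bar in code, are assumptions, with the production terms held constant); it shows B relaxing toward its steady state while S is clamped at satiation.

```python
# Forward-Euler sketch (constants assumed) of the Type-B / Type-S
# production dynamics, with Type-S output clamped at satiation.

def simulate(steps=1000, dt=0.1, g_B=2.0, g_S=5.0,
             c_B=0.05, c_S=0.02, S_bar=100.0):
    """Integrate dB/dt = g_B - c_B*B and dS/dt = g_S - c_S*S."""
    B, S = 0.0, 0.0
    for _ in range(steps):
        dB = g_B - c_B * B               # dB/dt = g_B(E, M, I) - c_B*B
        dS = g_S - c_S * S               # dS/dt = g_S(E, M, I) - c_S*S
        B += dB * dt
        S = min(S + dS * dt, S_bar)      # production halts at satiation demand
    return B, S

B, S = simulate()
print(f"B = {B:.1f} (approaching g_B/c_B = 40.0), S = {S:.1f} (held at S_bar)")
```

The clamp on S is the code-level form of the quota requirement from Section 3.3: without it, S would keep climbing toward g_S/c_S and the excess would be entropy waste.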

The AI model update:

θ_{t+1} = θ_t − η ∇L(D_t; θ_t)

where the training dataset D_t consists of external novelty plus synthetically generated data:

D_t = Novel(t) ∪ Synth(M_{t−1})

Model collapse occurs if |Novel(t)| / |D_t| < ε for an extended period.
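As a toy illustration of this condition (the growth rate, threshold, and data volumes are assumed, not derived), one can track the per-step novelty fraction when synthetic output compounds while external novelty stays flat:

```python
# Sketch (illustrative numbers mine): find the step at which the
# novelty fraction |Novel(t)| / |D_t| first falls below epsilon when
# synthetic data grows geometrically and external novelty is flat.

def first_collapse_step(novel_per_step=1_000, synth0=1_000, growth=1.5,
                        epsilon=0.01, horizon=50):
    """Return the first step t with novelty fraction below epsilon,
    or None if the fraction never crosses the threshold."""
    synth = synth0
    for t in range(horizon):
        fraction = novel_per_step / (novel_per_step + synth)
        if fraction < epsilon:
            return t
        synth *= growth              # synthetic output compounds each step
    return None

print(first_collapse_step())  # prints 12
```

The point of the sketch: even a novelty rate that never decreases is eventually swamped if synthetic volume grows without bound, which is why the stability condition below is stated on the fraction, not the absolute rate.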

A stable economic trajectory satisfies:

  1. Physical balance: For each bottlenecked good B_i, claims on B_i cannot exceed available B_i.

  2. Novelty condition: liminf_{t→∞} Ṅ(t) > δ > 0.

  3. Entropy bound: Total entropy production Σ̇(t) must not exceed the waste-heat rejection capacity.

7. Governance and Human Actualization

The model does not prescribe how to ensure Ṅ(t) > 0. It only states that economies failing to maintain a positive external novelty influx will experience model collapse, followed by control degradation, followed by physical shortages for Type-B goods.

Several governance approaches are compatible with maintaining Ṅ(t) from human sources:

  • Basic income plus cultural subsidy – Humans are free to pursue creative work, and the state funds education, arts, and science to enable actualization.

  • Compulsory novelty quotas – Each citizen must produce a minimum amount of novel information, such as research, art, or novel interpersonal data. This is ethically fraught but logically possible.

  • Reputation and status economies – In a post-scarcity material world, social rewards such as recognition, influence, and access to bottlenecked goods replace monetary ones for creative output.

  • Alternative external novelty sources – If a non-human source of Ṅ(t) emerges, such as novel physical processes, a different AI architecture immune to model collapse, or interaction with unpredictable natural systems, the dependency on human actualization could be reduced. The model does not foreclose this, but it notes that no such source is currently known.

The key point is that human actualization is not an optional flourish. Under the empirically grounded assumption that humans are the dominant external source of Ṅ(t), maintaining a population of actualized, cognitively active humans is a physical requirement for the stable operation of an AI-robotic economy. This reframes self-actualization from a moral ideal to a system-maintenance condition.

8. Resources and Contributions

Nature, “AI models collapse when trained on recursively generated data.” This source anchors the paper’s central technical claim that recursive training on synthetic outputs degrades model performance, erodes distributional diversity, and drives the system toward low-entropy collapse. It is the most direct empirical support for the novelty requirement in Section 5 and the reason the paper treats external information influx as a stability condition rather than an optional enhancement.

Shumailov et al., “The Curse of Recursion: Training on Generated Data Makes Models Forget.” This is the foundational paper for the model-collapse argument. It supplies the formal and experimental basis for the claim that when a model increasingly trains on its own outputs, the resulting dataset becomes progressively less representative of the underlying world, causing performance degradation and loss of tail information.

IBM, “What Is Model Collapse?” This source is used as a clear explanatory bridge between the technical literature and the paper’s broader argument. It helps frame model collapse in accessible terms, especially the idea that synthetic-data contamination causes compounding error, reduced diversity, and unstable long-term learning dynamics.

Thermodynamics-inspired explanations of artificial intelligence. This source supports the paper’s shift away from economics and toward a physical analysis of AI systems. It reinforces the idea that AI should be treated as a thermodynamically constrained process, where information processing, state evolution, and system stability must be understood in terms of physical limits rather than symbolic abstraction.

Thermodynamics of Information Processing in Small Systems. This source contributes the formal link between information and physical law. It supports the paper’s language about entropy, information flow, and the physical cost of maintaining order in a computational system, which underpins the discussion of control, throughput, and novelty.

Information Processing and Thermodynamic Entropy. This source strengthens the claim that information is not merely abstract but physically embedded. It is used to justify the paper’s treatment of information as a resource that can be depleted, transformed, and constrained by entropy production.

Thermodynamic computing system for AI applications. This source supports the argument that AI is not a purely software-level phenomenon, but a physical process bound to hardware, energy exchange, and thermodynamic limits. It helps validate the paper’s treatment of AI capacity as a substrate-level issue rather than a purely algorithmic one.

Khazanah Research Institute, “AI Slop III: Society and Model Collapse.” This source is used to extend the model-collapse discussion beyond technical training loops into the broader information environment. It supports the claim that synthetic-content saturation degrades informational ecosystems and creates a societal version of the same recursive collapse problem.

WitnessAI, “AI Model Collapse: Causes and Prevention.” This source provides an applied explanation of how recursive synthetic-data use can be mitigated or prevented. It is useful for supporting the paper’s discussion of practical control mechanisms and the need to preserve external novelty inflow.

Dave Goyal, “AI Model Collapse and Recursive Training.” This source is used as a readable supplemental explanation of why original human-authored data remains important. It helps support the paper’s claim that genuine external novelty contains variation and correction signals that synthetic loops tend to wash out.

9. Conclusion

We have shown that an AI-robotic economy must be understood at the physical substrate of matter, energy, entropy, and control. Five irreducible physical goods remain bottlenecked regardless of automation. Money breaks as a control signal when claims exceed physical capacity. And most critically, the stability of any recursively training AI system requires a continuous influx of novel, non-recursive information to avoid model collapse. Humans are a source of such novelty, but only when they are in a state of actualized cognitive and creative activity. Passive humans contribute no relevant input.

Therefore the human role in such an economy is not to perform material labor, which can be automated, but to generate novel information through actualized cognition. This is not a romantic ideal; it is a thermodynamic and informational necessity given current AI architectures. The paper provides a formal framework for analyzing these constraints and invites further work on governance mechanisms that sustain both physical allocation and human creativity.

