On April 6, 2026, OpenAI published a 13-page document titled “Industrial Policy for the Intelligence Age.” It proposes a public wealth fund, automation taxes, auto-triggering safety nets, a 32-hour workweek, and accelerated energy infrastructure. The comparison it reaches for is Alaska’s Permanent Fund: oil revenue redistributed to citizens as dividends.
That comparison is the tell. Not because it is wrong, but because it is structurally precise in ways the document does not intend. Alaska’s Permanent Fund was the compensation mechanism that made extraction politically viable. The oil still left. The wealth still concentrated. The dividends made both tolerable. OpenAI is proposing the same architecture for a different resource.
The document is not a policy proposal in the traditional sense. It is a permission structure. Read it through the Four Frequencies framework and what emerges is not what OpenAI wants to do for the public. It is what OpenAI needs the public to do for it.
The Permission Architecture
Every proposal in the document moves in the same direction: expanding the conditions under which AI companies can operate at unprecedented scale with public support.
AI economic zones. Accelerated permitting for data centers and transmission lines. A national target of 100 gigawatts per year of new energy capacity. Tax credits extended to AI-related sectors. Federal environmental reviews streamlined by AI itself. The document estimates that 20% of the existing skilled trades workforce will be needed for data center and energy construction over the next five years.
These are not requests. They are structural prerequisites for a business model that requires public infrastructure to function. The "Right to AI" framing, which positions AI access as foundational in the way electricity and literacy are, completes the permission architecture. Once AI is categorized alongside utilities and education, public funding of private AI infrastructure becomes not just acceptable but structurally difficult to refuse.
The document does not frame this as asking for something. It frames it as offering something. The proposals read as generosity: wealth funds, shorter workweeks, safety nets. The infrastructure demands are embedded inside the offer. You receive the fund. You also build the grid.
Permission, in the Four Frequencies framework, describes the structural conditions that enable or constrain action. What concentrates, what is allowed to concentrate, what mechanisms exist to check that concentration. This document is a Permission instrument. It constructs the frame in which the infrastructure buildout becomes a shared national project rather than a private investment subsidized by public resources.
Concentration by Design
The document names concentration once, as a risk to avoid: economic power and opportunity becoming “too concentrated.” Then every proposal concentrates capability further.
AI economic zones cluster investment in regions with existing infrastructure advantages. Accelerated permitting benefits the companies with capital to build at scale. The 100-gigawatt energy target serves the organizations consuming energy at data-center volumes. Tax credits for AI sectors flow to the firms already operating in those sectors. The wealth fund itself is seeded by AI companies, which means the companies large enough to seed it.
The wealth fund is the structural mechanism that makes this concentration politically sustainable. It is the Alaska model working exactly as designed. The resource is extracted. The returns concentrate. The fund distributes a fraction. The distribution makes the extraction tolerable.
A reasonable objection: AI is not oil. It is not a finite resource being depleted. But the thing being extracted here is not AI capability. It is public infrastructure spending, public tolerance for displacement, the labor of electricians and construction trades redirected to serve private buildout, and permitting authority surrendered to accelerate it. Those are finite. The wealth fund compensates for their consumption without preventing it.
OpenAI acknowledges that “every stage of the AI buildout relies on globally concentrated inputs or constrained manufacturing capacity.” This is the document naming its own supply chain fragility and then asking the public to absorb it. Critical minerals. Semiconductor fabrication. Grid components. Construction labor. The buildout depends on resources that are already concentrated, and the proposals further concentrate demand for those resources in the organizations capable of deploying them at scale.
The 32-hour workweek proposal is instructive here. It frames AI efficiency gains as something that can be returned to workers as time. The question it does not address is which workers. The organizations running 32-hour pilots at full pay are the ones whose AI adoption has already produced measurable efficiency gains. The firms where AI displaced work rather than created efficiency are not running workweek pilots. They are running layoffs.
Reactive, Not Preventive
The auto-triggering safety nets are the most structurally revealing proposal in the document. They are also the most honest. OpenAI is acknowledging that AI deployment will produce displacement severe enough to require automatic economic intervention.
The mechanism: tripwires tied to economic data. When displacement metrics hit preset thresholds, temporary increases in unemployment benefits, wage insurance, and cash assistance activate. When conditions stabilize, they phase out.
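To make the timing argument concrete, here is a minimal sketch of how such a tripwire might behave. Everything in it is invented for illustration: OpenAI's document specifies no metric, no threshold values, and no phase-out rule, so the single `displacement_rate` metric, the trigger and release levels, and the hysteresis band below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Tripwire:
    """Hypothetical auto-triggering safety net.

    Thresholds are illustrative; the OpenAI document does not specify them.
    """
    trigger_level: float = 0.06   # displacement metric at which benefits activate
    release_level: float = 0.04   # lower release level prevents rapid on/off flapping
    active: bool = False

    def update(self, displacement_rate: float) -> bool:
        """Return True while enhanced benefits should be active."""
        if not self.active and displacement_rate >= self.trigger_level:
            self.active = True    # net activates only AFTER the metric has risen
        elif self.active and displacement_rate <= self.release_level:
            self.active = False   # benefits phase out once conditions stabilize
        return self.active

# An invented displacement shock: the metric climbs for three months
# before the trigger fires, then subsides.
series = [0.03, 0.04, 0.05, 0.07, 0.08, 0.06, 0.05, 0.04, 0.03]
net = Tripwire()
for month, rate in enumerate(series):
    print(f"month {month}: rate={rate:.2f} benefits_active={net.update(rate)}")
```

Whatever thresholds are chosen, the structural point survives: by construction, the net can only activate after the metric has crossed the line, which is after the displacement it measures has already occurred.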
The structural problem is not the mechanism. It is the timing. These nets trigger after displacement has occurred. The capacity that gets stripped in the process does not come back when conditions stabilize. Institutional knowledge. Workforce depth. Operational judgment. Verification architecture. All of it is gone before the tripwire fires. Cash assistance does not rebuild what was lost. It compensates for the loss while the structural thinning continues underneath.
This is Thinness in the Four Frequencies framework: the gap between what an organization or an economy needs and what it actually has. The safety nets measure the output of thinning, which is unemployment and wage loss and economic contraction. They do not measure the structural input, which is the erosion of capacity that produced those outcomes. A workforce that received wage insurance for eighteen months and then returned to an economy where the roles no longer exist has been compensated. It has not been structurally supported.
A Goldman Sachs employment analysis published the same week as this document found a measurable AI-driven drag on hiring across multiple sectors. The displacement is not theoretical. It is already producing the conditions the safety nets are designed to respond to. The question is whether reactive compensation, triggered after the fact, can do structural work.
The Alaska model suggests it cannot. Alaska’s fund has paid dividends for over four decades. The structural economy of oil-dependent communities did not diversify.
The Governance Gap
What the document does not propose is more revealing than what it does.
There is no mechanism for public governance of how AI is deployed. The public funds the infrastructure and bears the displacement. It receives the wealth fund dividend. It does not get a seat at the table where deployment decisions are made.
This is the Absence dimension in the Four Frequencies framework. Not what is present and failing, but what is structurally missing. The governance architecture for public oversight of publicly funded AI infrastructure does not exist in this proposal. The document envisions a world where the public finances the energy buildout and absorbs the workforce displacement. In exchange, it receives a dividend. The decision about which industries AI enters, how fast, with what safeguards for the people and knowledge being displaced: that stays private.
The costs are socialized. The decisions are not.
OpenAI frames the document as a starting point for discussion. That framing is itself a permission move. A 13-page blueprint from the company that would benefit most from its adoption is not a discussion starter. It is a first offer, presented as a civic contribution.
Monday Morning: The Audit
This analysis is not about whether OpenAI’s proposals are good or bad policy. It is about what the structural architecture of the proposals reveals.
Three questions for anyone evaluating AI-related policy, whether you run an organization affected by it, work in one, or fund one.
First: map who bears the infrastructure cost, who bears the displacement cost, and who makes the deployment decisions. If the first two are public and the third is private, the proposal replicates the extractive structure regardless of what redistribution mechanism it offers.
Second: when a safety net is proposed, ask whether it triggers before or after the structural capacity it is meant to protect has been consumed. Reactive compensation and preventive architecture are different instruments. They are not interchangeable.
Third: look for the comparison the proposer reaches for. OpenAI chose the Alaska Permanent Fund. That comparison reveals the structural model being proposed: resource extraction with redistributive compensation. The fund does not replace the extraction. It makes the extraction sustainable.
Source: OpenAI (2026). "Industrial Policy for the Intelligence Age: Ideas to Keep People First." April 6, 2026. openai.com
The structural analyses referenced in this post are available in the Analysis Collection. The Four Frequencies framework is described at The Four Frequencies. The diagnostic that measures these conditions for organizations is at Organizations. Sector-level structural data is at Structural Intelligence.
This analysis publishes monthly. The Frequency Report goes deeper, with a structural tracker across twelve sectors, reader observations from the field, and a full four-frequency diagnostic each month.