Structural Observation

When Better AI Makes Organizations Worse

Frequencies explored: Absence, Thinness

S.J. Bridger · 7 min read

In February 2026, Daron Acemoglu published a paper that should concern every organization using AI for decision support. Acemoglu won the 2024 Nobel Prize in Economics for research on how institutions shape prosperity. His new paper, co-authored with researchers at MIT, proves mathematically that there is an optimal level of AI accuracy. Below that level, making AI more accurate helps. Above it, making AI more accurate triggers something the paper calls knowledge collapse.

The plain version: making your AI tools more precise can, under specific structural conditions, make your organization weaker. Not because the AI is wrong. Because it is right often enough that people stop doing the work that generates the knowledge your organization depends on to function.


The Surface Read

The AI governance market is booming. Compliance platforms and policy frameworks are multiplying. Investment in oversight infrastructure is accelerating. The working assumption across most of this activity is straightforward: the risk is that AI gets things wrong, and the solution is better controls and tighter accuracy standards.

Acemoglu’s model says the assumption misses the structural problem entirely.


The Learning Externality

The paper rests on a specific mechanism. When people do work themselves, the effort produces two things at the same time. First, a private signal: did this decision work for me? Second, a public signal that accumulates into the organization’s stock of shared knowledge. The paper calls this a learning externality (CIT-1044). The effort is expensive. But the knowledge it generates benefits everyone who comes after.

Agentic AI delivers recommendations that substitute for that human effort. The substitution is rational at the individual level. Why do the expensive work when the AI recommendation is good enough? But the substitution eliminates both signals. Fewer people doing the work means fewer signals feeding back into the collective knowledge base. And that base requires continuous replenishment. It depreciates when the replenishment stops, the way a muscle weakens when you stop using it.
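To make the mechanism concrete, here is a minimal sketch, assuming a single shared knowledge stock that depreciates every period and is replenished only by human effort. The function, parameter names, and numbers are illustrative assumptions, not the paper's equations.

```python
# A minimal sketch of the replenishment-versus-depreciation mechanic
# described above. The functional form, parameter names, and numbers
# are illustrative assumptions, not the paper's equations.

def update_knowledge(K, total_effort, depreciation=0.08, learning_rate=1.0):
    """One period: the shared stock decays, and only human effort refills it.

    In this sketch, AI recommendations substitute for the effort but add
    nothing to K, so every unit of effort that disappears is replenishment
    that disappears with it.
    """
    return (1.0 - depreciation) * K + learning_rate * total_effort
```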

The paper proves that when two conditions hold at the same time, the organization tips into a stable knowledge-collapse steady state. The conditions: human effort is elastic enough that people actually reduce their effort when AI provides good recommendations, and the AI is accurate enough that reducing effort looks rational. Both conditions must hold. One alone is insufficient. Where professional liability requirements, regulatory mandates for human review, or labor agreements keep effort inelastic, the collapse mechanism stalls. The structural question is whether those protections exist in your organization, and whether they will survive the next round of efficiency pressure.

Once the system crosses that threshold, it stays there. General knowledge has depreciated below the level where it can sustain itself. Even excellent AI recommendations cannot reverse the collapse. Nobody retains the contextual understanding to evaluate whether the AI’s recommendation applies to their specific situation.
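Here is a toy version of that tipping dynamic. It assumes effort drops to a residual level once AI accuracy crosses a substitution point, and that effort converts into shared knowledge only in proportion to the context that remains, which is what makes the collapsed state stable. The names substitution_point, residual_effort, and K_critical, and every number, are hypothetical.

```python
# Toy dynamic for the two-condition tipping described above. Every
# functional form and parameter value here is an illustrative assumption,
# not the paper's model.

def simulate(accuracy, residual_effort, periods=300, K=10.0,
             depreciation=0.08, learning_rate=1.0,
             K_critical=5.0, substitution_point=0.7):
    """Evolve a shared knowledge stock K over many periods.

    Condition 1 (elastic effort): once accuracy crosses substitution_point,
    per-person effort falls to residual_effort.
    Condition 2 (accuracy threshold): below substitution_point people keep
    doing the work, so the stock keeps getting replenished.
    Effort converts into shared knowledge only in proportion to the context
    that remains (K / K_critical), which is what makes the collapsed state
    stable once the stock is gone.
    """
    for _ in range(periods):
        effort = 1.0 if accuracy < substitution_point else residual_effort
        usable = min(1.0, K / K_critical)
        K = (1.0 - depreciation) * K + learning_rate * effort * usable
    return K

print(f"{simulate(accuracy=0.6, residual_effort=0.2):.2f}")  # AI below the threshold: stock sustained
print(f"{simulate(accuracy=0.9, residual_effort=0.2):.2f}")  # both conditions hold: collapse
print(f"{simulate(accuracy=0.9, residual_effort=0.5):.2f}")  # effort held inelastic: no collapse
```

The third run is the structural point about inelastic effort: a floor on effort, whatever holds it there, keeps replenishment above depreciation and the collapse condition never binds.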


A Steady State, Not a Crisis

Knowledge collapse is not a dramatic event. There is no alarm, no system failure that triggers an emergency response.

The knowledge depreciates below its self-sustaining threshold, and the organization settles into a new equilibrium. People still make decisions. AI still provides recommendations. Everything looks like it is working. The problem is that nobody knows enough to recognize what is missing.

Consider the water treatment operator who understood the specific behavior of the local aquifer. She did not write that knowledge down. It lived in the decisions she made every day, in the adjustments she made that no manual described. The same dynamic holds for the diagnostic radiologist who built pattern recognition across 40,000 films, or the compliance reviewer who knew which regulatory interpretations had actually been tested in court and which were theoretical. When these people leave and their replacements rely on AI-generated recommendations, the organization does not notice the loss immediately. The AI covers the gap. But the knowledge those people carried stops being replenished. It is not replaced. It depreciates.

Acemoglu formalizes this. The paper is not making an argument from nostalgia or change resistance. It provides a mathematical proof that this depreciation produces a stable equilibrium from which recovery is structurally difficult.


The Interior Optimum

The paper’s central welfare result is non-monotone. There exists a specific level of AI accuracy that maximizes welfare. Below that level, improving accuracy helps. People still do the work because the AI is not good enough to fully substitute for their effort. The learning externality continues.

Above that level, improving accuracy hurts. The AI is right often enough that people rationally stop doing the work. The learning externality dies. General knowledge depreciates. The organization loses the capacity to evaluate whether the AI’s recommendations fit its particular context.

The paper even discusses “deliberate garbling”: intentionally reducing AI output precision as a welfare-improving intervention. That is how serious the non-monotone finding is. The structurally optimal response to AI that has crossed the accuracy threshold is, according to the model, to make it less precise on purpose.
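A toy welfare curve makes the shape visible. Welfare here is an assumed weighted sum of accuracy's direct value and whatever knowledge stock survives at that accuracy, reusing the dynamic from the sketch above; none of the weights or functional forms come from the paper.

```python
# Toy welfare curve with an interior optimum in AI accuracy. The welfare
# function, weights, and dynamics below are assumptions for illustration,
# not the paper's specification.

def long_run_knowledge(accuracy, periods=300, K=10.0, depreciation=0.08,
                       learning_rate=1.0, K_critical=5.0,
                       substitution_point=0.7, residual_effort=0.2):
    for _ in range(periods):
        effort = 1.0 if accuracy < substitution_point else residual_effort
        usable = min(1.0, K / K_critical)
        K = (1.0 - depreciation) * K + learning_rate * effort * usable
    return K

def welfare(accuracy, accuracy_value=10.0, knowledge_value=1.0):
    # direct value of better recommendations plus the value of whatever
    # knowledge stock survives at that accuracy level
    return accuracy_value * accuracy + knowledge_value * long_run_knowledge(accuracy)

best_welfare, best_accuracy = max((welfare(a / 100), a / 100) for a in range(101))
print(f"welfare in this toy peaks at accuracy {best_accuracy:.2f}, not at 1.00")
```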

The paper’s own prescription is macro-level: information-design regulation that caps AI precision at the welfare-maximizing level. That may come. But organizations cannot wait for regulators to determine the right accuracy ceiling for their industry. They need to know now whether their specific conditions are on the collapsing side of the threshold. No compliance platform measures this. No governance framework asks this question. The question is not whether your AI tools meet regulatory standards. It is whether the conditions that sustain your organization’s collective knowledge are still intact.


The Aggregation Finding

Amid the non-monotone results, the paper contains one finding that is unambiguously positive. Greater capacity for aggregating general knowledge raises welfare. Unconditionally. The paper’s term is “monotonically”: more aggregation capacity is always better.

Tools and processes that surface distributed organizational knowledge, that convert private observations into shared understanding, make the knowledge-collapse threshold harder to reach. A companion paper by Acemoglu and colleagues (CIT-1045) extends this by modeling how aggregation mechanisms that consolidate distributed signals strengthen the knowledge base against depreciation.

The distinction matters. AI accuracy has a structural ceiling beyond which it becomes harmful. Knowledge aggregation does not. There is no level of aggregation capacity that makes things worse. If you invest in one thing to protect against knowledge collapse, the paper says invest in mechanisms that aggregate what your people know.
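One more variation on the same toy, with an assumed aggregation factor that scales how much of people's effort actually lands in the shared stock. In this sketch, raising it never hurts, and enough of it keeps the stock above the collapse threshold even at high accuracy.

```python
# Toy illustration of the aggregation finding. The "aggregation" factor
# and where it enters the dynamic are assumptions; the point is only that
# raising it is never harmful in this sketch.

def long_run_stock(accuracy, aggregation, periods=300, K=10.0,
                   depreciation=0.08, learning_rate=1.0, K_critical=5.0,
                   substitution_point=0.7, residual_effort=0.2):
    for _ in range(periods):
        effort = 1.0 if accuracy < substitution_point else residual_effort
        usable = min(1.0, K / K_critical)
        # aggregation scales how much exerted effort becomes shared knowledge
        K = (1.0 - depreciation) * K + learning_rate * aggregation * effort * usable
    return K

for aggregation in (1.0, 2.5, 4.0):
    print(f"aggregation={aggregation}: long-run stock at accuracy 0.9 -> "
          f"{long_run_stock(0.9, aggregation):.2f}")
```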


What This Means Inside Your Organization

Acemoglu’s model operates at the macroeconomic level. The paper describes knowledge collapse as an economy-wide phenomenon. But the structural mechanics do not require economy-wide scale to operate. The learning externality operates at every level where human effort produces shared knowledge as a byproduct, from a single team to an entire business unit. When a hospital’s experienced diagnosticians retire and are replaced by AI-assisted workflows, the public signal loss happens at the unit level. The math applies wherever the mechanism operates.

This paper does not endorse any particular organizational framework or diagnostic. What it does is provide the mathematical substrate for a structural dynamic that has been observable for years: the quiet depreciation of institutional knowledge as AI adoption accelerates. The organizations that navigate this successfully will be the ones that can answer a specific question. Are the conditions that sustain our collective knowledge still operating? Or have we already crossed the threshold without noticing?

Nobody who has crossed the threshold notices. That is what makes it a steady state.


Monday Morning: The Audit

Where in your organization has AI adoption reduced the number of people doing the work that generates institutional knowledge as a byproduct? Not where AI has improved productivity. Where it has replaced the effort that was producing shared understanding.

If the most experienced person in a critical function left tomorrow, how much of what they know exists in a form anyone else can access? Not documented procedures. The accumulated judgment that comes from decades of direct engagement with the work.

What mechanisms exist in your organization to aggregate distributed knowledge across different roles and levels? When was the last time anyone confirmed those mechanisms are still functioning?

These are measurable structural conditions.

Acemoglu’s model identifies the specific conditions that determine whether AI adoption strengthens or hollows out an organization: effort elasticity, accuracy thresholds, and knowledge stock levels. Any tool that aggregates distributed knowledge helps. The paper is clear on that. The AI Verification Readiness Assessment goes further: it measures the specific structural conditions the paper identifies as collapse precursors. Twelve dimensions across four frequencies, scored and mapped for the amplification dynamics that compound knowledge-collapse risk. If the pattern described in this post is operating inside your organization, the AVRA measures how close you are to the threshold.

AI Verification Readiness Assessment →

Frequently Asked Questions

What is knowledge collapse in the context of AI?

Knowledge collapse is a term from a 2026 NBER working paper by Nobel laureate Daron Acemoglu and MIT colleagues. It describes a stable steady state in which an organization’s stock of general knowledge depreciates below the level where it can sustain itself. This happens when AI recommendations substitute for human effort, eliminating the learning externality that replenishes collective knowledge over time.

How can better AI accuracy make organizations worse?

Acemoglu’s model proves that welfare is non-monotone in AI accuracy: there exists an interior optimum. Below that level, more accuracy helps because people still do the work themselves. Above it, AI is accurate enough that people rationally stop doing the work that generates institutional knowledge as a byproduct. The learning externality dies, general knowledge depreciates, and the organization loses the capacity to evaluate whether AI recommendations apply to specific situations.

What conditions trigger knowledge collapse?

Two conditions must hold simultaneously: human effort must be elastic enough that people reduce effort when AI provides good recommendations, and AI must be accurate enough that reducing effort appears rational. Where professional liability requirements, regulatory mandates for human review, or labor agreements keep effort inelastic, the collapse mechanism stalls.

What is the difference between knowledge collapse and a knowledge crisis?

Knowledge collapse is a steady state, not a crisis. There is no alarm, no system failure, no dramatic event. The knowledge depreciates below its self-sustaining threshold, and the organization settles into a new equilibrium where people still make decisions, AI still provides recommendations, and everything appears to function normally. Nobody retains enough contextual understanding to recognize what is missing.

What does the Acemoglu paper say about knowledge aggregation?

The paper contains one unambiguously positive finding: greater capacity for aggregating general knowledge raises welfare unconditionally and monotonically. Tools that surface distributed organizational knowledge make the knowledge-collapse threshold harder to reach. Unlike AI accuracy, which has a structural ceiling beyond which it becomes harmful, knowledge aggregation has no such ceiling.


The structural analyses referenced in this post are available in the Analysis Collection. The Four Frequencies framework is described at The Four Frequencies. The diagnostic that measures these conditions for organizations is at Organizations, with a focused AI verification assessment at AI Verification Readiness. Sector-level structural data is at Structural Intelligence.

Related: The Verification Gap Nobody Owns examines the operational side of this dynamic, what happens when verification behavior atrophies as models improve.

This analysis publishes monthly. The Frequency Report goes deeper, with a structural tracker across twelve sectors, reader observations from the field, and a full four-frequency diagnostic each month.
