RKIVE AI
Cover image for the article: The AI Plateau Mirage

The AI Plateau Mirage

Author portrait for Alberto Luengo
By Alberto Luengo · 02/26/26
AI · enterprise · content strategy · brands
Why low adoption, moat collapse, and SaaS-is-dead narratives keep missing the structural signal. What looks like a plateau is consolidation before expansion.
This essay argues that current AI pessimism misreads the cycle. Individual user penetration remains early while enterprise deployment is already systemic. Retention is strengthening, coding productivity is accelerating, and architectural research is reorganizing beyond pure scaling. The result is not collapse but market consolidation ahead of a broader expansion wave. The article breaks down adoption denominators, infrastructure signals, compute economics, moat mechanics, SaaS category confusion, and why the application layer is evolving into intelligent environments rather than disappearing.


Why "Low Adoption," "Moats Collapsing," and "SaaS Is Dead" Are All the Wrong Read

Every cycle has its mirage.

Right now, the mirage in AI looks like this:

  • Only a small percentage of people use AI.
  • Maybe just 2-5% actually pay.
  • Models have plateaued.
  • The moat is collapsing.
  • Chinese labs are doing it cheaper.
  • GPUs are oversupplied.
  • AI killed SaaS.

Individually, each claim sounds plausible.

Together, they create a narrative of quiet deflation.

But when you examine the structural signals -- enterprise penetration, retention curves, production deployment growth, coding output acceleration, architectural research direction -- the conclusion flips.

This is not a plateau.

It is consolidation before expansion.


I. The Adoption Illusion

Recent visualizations mapping AI interaction across humanity show something striking: the majority of the global population has never meaningfully interacted with AI. Roughly 16% of people globally use generative AI tools, and only a tiny fraction pay individually for them. [11]

That sounds underwhelming.

Until you realize that total population is the wrong denominator.

Children do not subscribe to enterprise copilots. Retirees are not deploying AI into logistics pipelines. Large portions of manual labor markets are not SaaS buyers.

The relevant denominator is the economically productive, competitive layer of the economy.

And there, the signal looks very different.

According to McKinsey's 2024 State of AI report, 88% of organizations report using AI in at least one business function, with generative AI adoption accelerating rapidly across workflows. [10]

That is not niche experimentation.

That is systemic penetration.

And it explains the apparent contradiction:

How can only around 16% of individuals use AI while frontier labs generate billions in revenue?

Because revenue is driven by enterprise contracts, API usage, bundled licensing, and workflow integration -- not just individual subscriptions.

Individual payment rates understate real integration.
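The denominator effect can be made concrete with a back-of-envelope calculation. The workforce figure below is a rough illustrative assumption, not sourced data; only the 16% share comes from the article's citation:

```python
# Illustrative denominator arithmetic (workforce figure is an assumption).
world_population = 8_000_000_000
gen_ai_users = 0.16 * world_population  # ~16% of humanity [11]

# Hypothetical: the economically productive, competitive layer is far smaller.
knowledge_workforce = 1_000_000_000  # assumed, for illustration only

# The same absolute user base yields very different penetration rates
# depending on which denominator you divide by.
rate_vs_population = gen_ai_users / world_population
rate_vs_workforce = min(gen_ai_users / knowledge_workforce, 1.0)

print(f"vs. global population:   {rate_vs_population:.0%}")  # 16%
print(f"vs. knowledge workforce: {rate_vs_workforce:.0%}")   # looks saturated
```

Under these assumed numbers, the same user base reads as "16% adoption" against humanity and as near-saturation against the productive layer: the headline figure is a choice of denominator, not a measure of maturity.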



II. Retention: The Infrastructure Signal

Even more telling than adoption is retention.

Leading AI platforms are showing strengthening retention curves -- usage deepens over time instead of decaying.

Hype products spike and collapse.

Infrastructure products compound.

This pattern mirrors historical SaaS retention improvements observed during vertical integration waves. [14]

When retention strengthens while enterprise penetration accelerates, that is not a bubble signature.

It is infrastructure embedding.
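The difference between the two retention signatures compounds sharply over a cohort's life. A toy cohort model makes this visible; the monthly retention numbers below are purely illustrative and not drawn from any real product's data:

```python
# Toy cohort model contrasting two retention signatures
# (all retention figures are purely illustrative assumptions).

def surviving_share(monthly_retention: list) -> float:
    """Fraction of an initial cohort still active after all months."""
    share = 1.0
    for r in monthly_retention:
        share *= r
    return share

# Hype signature: strong start, decaying retention (spike, then churn).
hype = [0.90, 0.70, 0.50, 0.40, 0.35, 0.30]

# Infrastructure signature: retention strengthens as usage embeds in workflows.
infra = [0.80, 0.85, 0.90, 0.93, 0.95, 0.97]

print(f"hype cohort remaining:  {surviving_share(hype):.1%}")
print(f"infra cohort remaining: {surviving_share(infra):.1%}")
```

With these assumed curves, the hype cohort retains about 1% of users after six months while the infrastructure cohort retains over half, even though the hype product started with the higher month-one retention. Strengthening retention is what separates embedding from fashion.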



III. The Productivity Explosion

If models had truly plateaued in utility, we would expect productivity metrics to flatten.

Instead, we observe the opposite.

Financial Times data shows year-on-year acceleration in:

  • new website launches
  • new iOS apps
  • GitHub code pushes (US and UK)

All increasing sharply over the past year. [12]

This is consistent with AI-augmented coding and workflow acceleration.

If this is a plateau, it is the most productive plateau in modern computing history.



IV. The Plateau Narrative

Yes, transformer scaling is showing diminishing marginal returns in some benchmarks.

But plateau does not mean ceiling.

It means architectural transition pressure.

Every major computing wave hits limits before reorganizing:

  • clock speed limits -> multi-core architectures
  • manual feature engineering -> deep learning
  • supervised scaling -> self-supervised pretraining

Now we are seeing pressure toward:

  • world models (spatial reasoning systems) [5][6]
  • joint multimodal architectures (audio-video-text) [4]
  • video-based representation learning (V-JEPA) [7][8]
  • temporal continuity modeling
  • expanded context optimization techniques [1][2]

Architectural tension precedes breakthroughs.

Even human cognition develops through event segmentation and temporal structuring of perception, not raw token accumulation. [3]

The current plateau resembles a compression phase before a structural leap.


V. Cost Compression Is Not Moat Compression

Another popular argument:

If models are becoming cheaper to train, the moat disappears.

This confuses cost with capability.

The moat in frontier AI is not raw GPU spend.

It is:

  • research talent density
  • data flywheels
  • infrastructure orchestration
  • deployment integration
  • distribution leverage
  • product execution depth

There are not thousands of frontier AI labs.

There are very few.

And they are not competing primarily against each other -- they are competing against the non-AI-native majority of global industries.

Efficiency improvements increase the leverage of capable players.

They do not eliminate differentiation.

Historically, startup exits double roughly every five years in transformative waves. [16] Market consolidation does not imply moat erosion -- it implies scale concentration.



VI. GPUs and Compute: Demand Follows Efficiency

Every compute cycle follows the same law:

When compute becomes cheaper, usage expands.

We are early in:

  • video-native AI
  • spatial reasoning systems
  • multimodal generation
  • long-horizon agents

These workloads are more compute-intensive than text-only LLMs.

Meanwhile, infrastructure capital is scaling -- public and private.

This mirrors prior internet expansion phases where GDP contribution continued compounding despite periodic narrative pessimism. [13]

Compute is not collapsing.

It is reorganizing around new workloads.
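This is a Jevons-style dynamic: when the unit cost of compute falls, usage expands more than proportionally, so total spend can rise. A toy elasticity model sketches the mechanism; the elasticity value and base figures are hypothetical, chosen only to illustrate the elastic (greater-than-one) regime:

```python
# Toy Jevons-style model: falling unit cost, rising total spend.
# Elasticity and base figures are hypothetical illustrations.

def total_spend(unit_cost: float, base_cost: float = 1.0,
                base_usage: float = 100.0, elasticity: float = 1.5) -> float:
    """Usage scales as (base_cost / unit_cost) ** elasticity;
    spend is unit cost times usage."""
    usage = base_usage * (base_cost / unit_cost) ** elasticity
    return unit_cost * usage

for cost in (1.0, 0.5, 0.1):
    print(f"unit cost {cost:>4}: total spend {total_spend(cost):8.1f}")
```

With elasticity above 1, each halving of unit cost grows total spend rather than shrinking it, which is the historical pattern every compute cycle in the essay's framing has followed. With elasticity below 1, cheaper compute really would mean a smaller market; the argument turns on which regime AI workloads are in.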


VII. "AI Killed SaaS" -- A Category Error

SaaS is not a product category.

It is a delivery model: software as a service.

AI is delivered as software. Via subscription. As a service.

AI-native systems are SaaS.

What changes is not distribution.

It is the nature of the software -- from deterministic logic to adaptive probabilistic systems.

But execution remains:

  • architecture
  • reliability
  • UX
  • domain depth
  • operational scaling

Vertical SaaS waves historically rewarded companies that deeply integrated domain workflows rather than simply layering superficial features. [14]

AI-native SaaS will follow the same dynamic.


VIII. You Can Build an App With a Prompt

Yes, you can generate demos faster.

You could generate demos ten years ago with no-code tools.

Execution has always been the moat.

The gap between:

A prompt-generated prototype

and

A production-grade, scalable, differentiated system

remains vast.

It includes:

  • systems orchestration
  • cost optimization
  • security
  • infrastructure reliability
  • UX coherence
  • strategic positioning

AI-assisted coding accelerates strong teams.

It does not eliminate the execution gap.

If anything, it widens it.


IX. The Application Layer Is Becoming Intelligent Environments

The belief that the application layer disappears assumes intelligence replaces structure.

But intelligence without structure produces chaos.

Human cognition relies on structured perception, segmentation, and interface with the world. [3]

Similarly, digital systems require structured environments.

The next generation of AI products will not be disembodied voice agents.

They will be intelligent environments:

  • context-aware
  • visually structured
  • deeply integrated
  • domain-native

This is precisely the thesis behind building AI-native platforms like Rkive AI -- not as wrappers, but as environments where intelligence is embedded into workflow, design, and execution.

The application layer is not dying.

It is reorganizing.

And large portions of the AI agent market remain wide open beyond early dominant categories. [15]


X. Where the Value Concentrates

The value in this cycle will concentrate in:

  • frontier research labs advancing architectural transitions
  • infrastructure providers orchestrating compute and deployment
  • AI-native application companies executing deeply differentiated products

Historically, internet GDP expansion rewarded those who embedded into productive workflows rather than those chasing superficial hype. [13]

This moment feels volatile.

But structurally:

  • enterprise adoption is high. [10]
  • consumer adoption is still early. [11]
  • productivity metrics are accelerating. [12]
  • architectural research is expanding. [4][5][7]
  • market categories remain open. [15]

That is not the signature of collapse.

That is the signature of a market organizing itself before expansion.



Closing

If only around 16% of humanity uses generative AI, we are early. [11]

If nearly 88% of organizations deploy AI in at least one function, we are embedding infrastructure. [10]

If coding output is accelerating, we are compounding productivity. [12]

If architectural tension is building, we are approaching breakthroughs.

The plateau is a mirage.

What we are witnessing is consolidation before scale.

And for founders and investors who understand technological cycles, this is not the time to retreat.

It is the time to build -- with depth, with architecture, and with conviction.


References

[1] Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., Liang, P. (2023). Lost in the Middle: How Language Models Use Long Contexts. arXiv:2307.03172.
URL: https://arxiv.org/abs/2307.03172

[2] Huang, C., Zhu, G., Wang, X., Luo, Y., Ge, G., Chen, H., Yi, D., Wang, J. (2024). Recurrent Context Compression: Efficiently Expanding the Context Window of LLM. arXiv:2406.06110.
URL: https://arxiv.org/abs/2406.06110

[3] Zacks, J. M., Speer, N. K., Swallow, K. M., Braver, T. S., Reynolds, J. R. (2007). Segmentation in the perception and memory of events. Trends in Cognitive Sciences.
URL: https://www.sciencedirect.com/science/article/pii/S1364661307003312

[4] ByteDance Seed. (2026). Seedance 2.0 -- unified multimodal audio-video joint generation architecture.
URL: https://seed.bytedance.com/en/seedance2_0

[5] World Labs. (2025-2026). World Labs -- spatial intelligence company building models that perceive, generate, and interact with the 3D world.
URL: https://www.worldlabs.ai/

[6] Vincent, J. (2025). World Labs is betting on world generation as the next AI frontier. The Verge.
URL: https://www.theverge.com/ai-artificial-intelligence/820016/world-labs-is-betting-on-world-generation-as-the-next-ai-frontier

[7] Bardes, A., Garrido, Q., Ponce, J., Chen, X., Rabbat, M., LeCun, Y., Assran, M., Ballas, N. (2024). Revisiting Feature Prediction for Learning Visual Representations from Video (V-JEPA). arXiv:2404.08471.
URL: https://arxiv.org/abs/2404.08471

[8] Meta AI / FAIR. (2024-2026). facebookresearch/jepa -- V-JEPA codebase and resources.
URL: https://github.com/facebookresearch/jepa

[9] OpenStax / Baylor University OpenBooks. (n.d.). Cognition in Infancy and Childhood.
URL: https://openbooks.library.baylor.edu/lifespanhumandevelopment/chapter/chapter-9-1-cognition-in-infancy-and-childhood/

[10] McKinsey & Company. (2024). The State of AI in 2024: Generative AI's breakout year.
URL: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

[11] Microsoft. (2024). Work Trend Index Annual Report: AI at Work Is Here Now.
URL: https://www.microsoft.com/en-us/worklab/work-trend-index

[12] Burn-Murdoch, J. (2026). The past year has seen an explosion in coding productivity. Financial Times.
URL: https://www.ft.com/

[13] Andreessen Horowitz (a16z). (2024). Still Increasing the GDP of the Internet.
URL: https://www.a16z.news/p/still-increasing-the-gdp-of-the-internet

[14] Andreessen Horowitz (a16z). (2024). Charts of the Week: Vertical SaaS.
URL: https://www.a16z.news/p/charts-of-the-week-vertical-saas

[15] Tan, G. (2024). Half the AI Agent Market Is One Category -- The Rest Is Wide Open. Garry's List.
URL: https://garryslist.org/posts/half-the-ai-agent-market-is-one-category-the-rest-is-wide-open

[16] Tan, G. (2024). Startup Exits Double Every Five Years. Garry's List.
URL: https://garryslist.org/posts/startup-exits-double-every-five-years


About the author

Alberto Luengo is the founder and CEO of Rkive AI. He writes practical analysis on AI systems, market cycles, and product strategy.
