The Work Was Always There

Every generation gets its doomsday technology.

The power loom was going to starve the weavers. Electricity was going to render factories obsolete overnight. The internet was going to hollow out every brick-and-mortar business on earth. And now, if you read the latest viral macro fiction, large language models are going to vaporize white-collar work, crash the S&P 500 by 38%, and leave us all wandering through a wasteland of "Ghost GDP."

I run a data engineering and AI consultancy. My team ships production systems that use these models every day. And from where I sit, the doom narrative is not just wrong. It is lazy. It mistakes the compression of one type of labor for the elimination of all labor. It confuses efficiency with extinction. And it ignores the most important variable in the entire equation: the staggering volume of valuable work that is currently being done badly, done slowly, or not done at all.

That is where the real story is.

We have seen this movie before

The Citrini Research report, “The 2028 Global Intelligence Crisis,” landed in February 2026 and immediately moved markets. The premise: agentic AI rapidly displaces white-collar workers, consumer spending collapses, the mortgage market buckles, and by mid-2028 we are staring down 10.2% unemployment and a full-blown deflationary spiral. Citrini calls AI an "economic pandemic" and coins the term "Ghost GDP" to describe output that surges on paper while wages crumble in reality.

It is a well-constructed thought experiment. Citrini themselves labeled it as exactly that: a "big what-if scenario," not a prediction. But the scenario went viral because it activates the same fear that every major technological transition triggers.

And here is the thing about that fear: it has been wrong every single time.

The Industrial Revolution is the most instructive parallel. Between 1780 and 1840, Britain experienced what economic historian Robert Allen named the "Engels' Pause," after Friedrich Engels, who documented in real time how factory output was soaring while worker wages flatlined. The pain was real. The displacement was real. The social disruption was brutal and well-documented.

But what happened next? The railroad boom, the expansion of consumer goods, the creation of entirely new industries and job categories that would have been unrecognizable to an 18th-century weaver. Real wages eventually doubled. The labor force did not shrink. It transformed.

The internet followed the same pattern on a compressed timeline. In the late 1990s and early 2000s, the prevailing narrative was that e-commerce would annihilate retail, automated trading would eliminate finance jobs, and digital media would kill every content creator who was not willing to work for free. What actually happened? The internet created 17.6 million direct jobs and another 10.6 million indirect ones. Entirely new fields emerged: web development, digital marketing, UX design, data science, cloud infrastructure, cybersecurity. The Bureau of Labor Statistics literally had to invent new job classifications to keep up.

Citadel Securities made this point in their rebuttal to Citrini: "Technological diffusion has historically followed an S-curve, where early adoption is slow, accelerates as costs fall, and eventually plateaus as saturation sets in and marginal returns diminish." Citrini's error, Citadel argued, is conflating recursive technology with recursive economic adoption. Just because AI can write code to improve itself does not mean its integration into the economy compounds infinitely and instantaneously. There are real-world constraints: energy, compute, organizational inertia, regulation, and the simple fact that if the marginal cost of compute rises above the marginal cost of human labor for a given task, substitution does not occur.

The Indeed job posting data from early 2026 backs this up. Demand for software engineers is up 11% year over year. New business formation in the U.S. is expanding rapidly. The construction of AI data centers is driving a localized boom in physical labor markets. The St. Louis Fed's analysis of the Real-Time Population Survey shows that daily use of generative AI for work remains "unexpectedly stable" and "presents little evidence of any imminent displacement risk."

None of this means the transition will be painless. It will not be. The Engels' Pause was real, and we may be entering our own version of it. Bank of America analysts have noted a similar dynamic: productivity metrics climbing while wage growth stalls. The gap between output and compensation is a legitimate concern, and it deserves serious policy attention.

But a gap between output and wages is not the same thing as an economic apocalypse. It is a distribution problem. And distribution problems have solutions.

How LLMs actually work (and why that matters for this debate)

The doom narrative depends on a specific assumption: that AI capabilities compound exponentially and without limit. To evaluate that assumption, you have to understand what these systems actually are and what they actually do.

A large language model is, at its core, a next-token prediction engine built on the transformer architecture. It processes text as sequences of tokens, applies attention mechanisms to weigh the relevance of every token against every other token in the sequence, and generates output by predicting the most probable next token given all prior context. The model does not "think." It does not "understand" in the way humans understand. It performs extraordinarily sophisticated pattern matching across billions of parameters trained on trillions of tokens of text.
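For readers who want to see the mechanics rather than take them on faith, here is a deliberately tiny sketch of that loop in Python with random weights. It is an illustration of the idea, not a working model: real systems stack dozens of attention layers over billions of learned parameters.

```python
# Toy illustration of the loop described above: causal scaled dot-product
# attention over a short token sequence, then a next-token prediction.
# Every weight here is random; a real model learns billions of them.
import numpy as np

rng = np.random.default_rng(0)

vocab_size, d_model, seq_len = 50, 16, 5
token_ids = rng.integers(0, vocab_size, size=seq_len)    # "the prompt"

# Embedding table and projection matrices (randomly initialized here)
E = rng.normal(size=(vocab_size, d_model))
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))
W_out = rng.normal(size=(d_model, vocab_size))

x = E[token_ids]                                          # (seq_len, d_model)
Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Attention: every token weighs its relevance against every prior token
scores = Q @ K.T / np.sqrt(d_model)                       # (seq_len, seq_len)
mask = np.triu(np.ones((seq_len, seq_len)), k=1) * -1e9   # causal mask
weights = np.exp(scores + mask)
weights /= weights.sum(axis=-1, keepdims=True)            # softmax
context = weights @ V                                     # blended representations

# Next-token prediction: a probability distribution over the vocabulary
logits = context[-1] @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(f"predicted next token id: {int(np.argmax(probs))}")
```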

Scaling laws, first formalized by researchers at OpenAI in 2020 and since refined extensively, show that model performance improves predictably as you increase three variables: the number of parameters, the volume of training data, and the amount of compute used during training. But "predictably" is the key word. The improvements follow a power law, not an exponential curve: loss keeps falling as compute grows, but each increment buys less than the last. You need roughly 10x more compute to get a meaningful step-function improvement in capability.
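A back-of-the-envelope sketch makes the shape of that curve concrete. The power-law form follows the scaling-law literature; the constants below are illustrative placeholders, not fitted values from any paper.

```python
# Illustrative scaling curve: loss as a smooth power law in training compute.
# Constants are made up for demonstration, not the published fits.
def loss(compute, L_irreducible=1.7, a=12.0, b=0.05):
    """Loss as a power law in training compute (arbitrary units)."""
    return L_irreducible + a * compute ** -b

for c in [1e3, 1e4, 1e5, 1e6]:
    print(f"compute {c:>9.0e} -> loss {loss(c):.3f}")

# Each 10x increase in compute shaves off a smaller absolute slice of loss:
# the curve keeps improving, but it never goes vertical.
```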

And here is where physical reality intervenes. Training runs for frontier models already cost hundreds of millions of dollars and consume megawatts of power. The "densing law" identified by researchers at Tsinghua University shows that capability density (performance per parameter) doubles approximately every 3.5 months, meaning smaller models are getting smarter faster. An 8-billion-parameter model in 2026 can match what a 70-billion-parameter model could do in 2024. That is remarkable progress. But it is not the recursive intelligence explosion that the doom narrative requires.
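The arithmetic behind that comparison is worth a quick sanity check. Assuming the 3.5-month doubling figure holds, here is what two years of the densing law implies:

```python
# Rough sanity check on the densing-law claim above. Assumes density doubles
# every 3.5 months, per the figure cited in the text; the 70B-vs-8B comparison
# over roughly two years is the article's own example.
months = 24                      # early 2024 -> early 2026
doubling_period = 3.5            # months per density doubling
density_gain = 2 ** (months / doubling_period)
print(f"implied density gain over {months} months: ~{density_gain:.0f}x")

# The example only requires about a 70/8 ~ 9x gain in performance per
# parameter, comfortably inside the trend: striking progress on a smooth
# curve, not a runaway explosion.
print(f"gain needed for an 8B model to match a 70B model: ~{70/8:.1f}x")
```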

The real innovation in 2025 and 2026 is not bigger models. It is smarter inference. Test-time compute, where models spend more resources generating chains of reasoning before producing a final answer, has opened up new capability frontiers. Hybrid architectures that blend transformer layers with state-space models and mixture-of-experts routing are making models faster and more efficient. The direction of the field is toward doing more with less, not toward an unbounded intelligence singularity.
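You can see the shape of the test-time-compute trade in a toy simulation. The "model" below is just a noisy function that is right 60% of the time, standing in for one sampled reasoning chain; the point is that spending more inference per question buys accuracy, at a cost, up to a point.

```python
# Minimal, self-contained sketch of the test-time compute idea: sample several
# answers per question and majority-vote, trading inference compute for quality.
import random
from collections import Counter

random.seed(42)

def noisy_model(correct_answer: str) -> str:
    """Toy stand-in for one sampled reasoning chain: right 60% of the time."""
    return correct_answer if random.random() < 0.6 else "wrong"

def answer(correct_answer: str, n_samples: int) -> str:
    # More samples = more compute spent per question = a better shot at the
    # right answer, with diminishing returns.
    votes = Counter(noisy_model(correct_answer) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

for n in (1, 5, 25):
    trials = [answer("right", n) == "right" for _ in range(1000)]
    print(f"{n:>2} samples per question -> {sum(trials) / 10:.1f}% accuracy")
```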

This matters for the economic debate because the practical capability of these systems is bounded. They are extraordinarily good at specific categories of tasks: summarization, translation, code generation, pattern recognition across large datasets, drafting, classification, and structured reasoning. They are not good at novel strategic thinking, relationship building, creative problem-solving in ambiguous domains, or navigating the messy, interpersonal dynamics that define most professional work.

The displacement risk is real for narrow, repetitive cognitive tasks. The creation opportunity is enormous for everything else.

The uncapped value problem

Here is the argument the doom narrative completely ignores.

I have spent the past decade working with organizations across industries: financial services, healthcare, media, retail, logistics. And the single most consistent observation I can make is this: the volume of important work that is not getting done is staggering.

Data quality audits that never happen because nobody has the bandwidth. Compliance reviews that get rubber-stamped because the team is underwater. Customer communications that go out generic and impersonal because personalization at scale was never feasible. Analytics dashboards that sit unused because the data pipelines feeding them are unreliable. Strategic initiatives that stall in committee because nobody has time to do the research that would move the conversation forward.

This is not hypothetical. Every professional services firm, every enterprise, every mid-market company I have worked with has a backlog of high-value work that is either being done poorly or not being done at all. Not because people are lazy. Because there are not enough hours in the day, the tools have been inadequate, and the cost of doing it right has been prohibitive.

What happens when AI changes that cost curve?

McKinsey's internal AI platform, Lilli, is used by 72% of their consultants. It handles over 500,000 prompts per month and saved approximately 1.5 million hours in 2025. Did McKinsey lay off 30% of their consultants? No. They redirected that capacity toward higher-value work: deeper client engagements, more complex analyses, faster delivery cycles. Accenture reported $3.6 billion in AI bookings for fiscal 2025, nearly doubling year over year. Professional services firms are not shrinking. They are growing, faster, because AI is making previously uneconomical work economical.

The data backs this up at a macro level too. Professional services leads all sectors in generative AI adoption, with implementation rates jumping from 33% in 2023 to 71% in 2024. Firms that have integrated AI effectively report 26-55% productivity gains and $3.70 in ROI per dollar invested. They are reclaiming 15-20 hours per week from administrative work, improving deliverable quality by 20-30%, and increasing effective capacity without growing headcount.

That last point is critical. "Increasing effective capacity without growing headcount" sounds like a job-destruction story if you are a pessimist. But look at what it actually means in practice: firms are taking on more projects, serving more clients, and tackling problems they previously could not afford to touch. The work was always there. The economics just did not support doing it.

What the professional services transition actually looks like

I can speak to this directly because we live it every day at Blue Orange Digital.

Three years ago, a mid-market company that wanted to build a production-grade data platform needed a team of 5-8 engineers working for 6-9 months. The cost put it out of reach for most companies below $500 million in revenue. Today, with AI-augmented engineering workflows, we can deliver comparable scope with smaller teams in compressed timelines. The cost has dropped. The quality has not.

Did that eliminate engineering jobs? No. It expanded the addressable market. Companies that could never have afforded a modern data stack are now building them. The total volume of data engineering work being done in the economy is growing, not shrinking, because the price point has come down to where more organizations can participate.

This is not unique to data engineering. It is the pattern across professional services. Legal teams are using AI to accelerate contract review, which means they are actually reviewing more contracts instead of rubber-stamping the ones they do not have time to read. Financial analysts are using AI to process larger datasets, which means they are catching risks they would have missed. Marketing teams are using AI to personalize communications at scale, which means they are actually engaging customers instead of blasting generic messages.

The consulting industry is shifting from project-based delivery to outcome-based AI initiatives. AI consulting alone is projected to account for 40% of consulting revenue by 2026, up from 20% in 2024. The World Economic Forum projects a net gain of 78 million jobs globally by 2030. Not the same jobs. Different jobs. Better jobs, in many cases.

The real risk is moving too slowly

The Citrini report asks the right question: what happens to economic stability when a foundational technology reshapes how work gets done? That is a question worth taking seriously. The Engels' Pause is real and worth studying. The distribution of gains between capital and labor deserves aggressive policy attention.

But the scenario Citrini constructs requires you to believe that AI adoption will be instantaneous, that no new work will be created to replace the work that is automated, and that the economy is a static system where eliminating one type of task means eliminating the need for human contribution entirely. None of those assumptions hold up against history or against the current data.

The bigger risk, the one I worry about as someone who builds these systems for a living, is not that AI will move too fast. It is that organizations will move too slowly. The companies and industries that refuse to adopt, that cling to legacy processes out of fear or inertia, are the ones that will face real economic pain. Not because AI destroyed their jobs, but because their competitors used AI to do more work, better work, and faster work, and left them behind.

Every major technology transition produces winners and losers. The losers are not the ones who adopt the technology. They are the ones who do not.

The work was always there. Now we can finally do it.

Josh Miramant
CEO

Founded and exited two venture-backed analytics companies; technical founder with deep cloud data expertise.