Beyond the Boom: Understanding AI’s True Economic Impact and Hidden Financial Danger

We often claim to fear the future, yet live as though it won’t arrive. Nowhere is this contradiction more vivid than in our response to transformative technologies like artificial intelligence. Grand pronouncements of existential risk coexist with long-term investments, family planning and business as usual. This isn’t simple hypocrisy; it’s a deeper pattern — one we’ve seen before.

When automobiles first emerged, critics warned of chaos, unemployment and urban collapse. Some even predicted moral decline. And yet, the same people bought cars, built highways and reshaped their lives around the machine they claimed to fear. We saw similar patterns with personal computers, the Internet and more recently, 3D printing: breathless proclamations of revolution, followed by quietly adaptive behavior.

In moments of radical uncertainty, we don’t act on belief alone. We fall back on habit, intuition and social signaling. As philosopher Michael Polanyi argued, tacit knowledge — what we know but cannot quite say — often guides our choices more than explicit reasoning. Thus, even self-proclaimed AI doomers invest in college funds. Risk, filtered through the lens of culture, becomes not just a calculation, but a posture.

This disconnect between stated belief and lived behavior suggests mismeasurement of risk, time and ourselves.

This raises concerns about the ideological dimensions of AI optimism. Some thinkers argue that techno-solutionism revives a kind of central-planning mindset — one that mirrors historical overconfidence in expert-led, algorithmic optimization. The idea that an AI could mediate political conflict or design ideal public policy reflects an underappreciation of complexity, decentralized knowledge and human agency.

AI-driven policymaking often assumes that people are passive objects to be optimized, overlooking the fact that human beings are active agents who negotiate, interpret and shape political systems. Political conflicts rarely revolve solely around measurable “outcomes”; they are deeply tied to identity, recognition and legitimacy. An AI “mediator” proposing mathematically optimal compromises may fail because it cannot capture the emotional dimensions of political life.

Moreover, public policy inherently involves moral trade-offs: balancing equity against efficiency, security against privacy and growth against sustainability. Delegating such judgments to algorithms risks outsourcing morality to systems that may mimic ethical reasoning from training data but lack genuine normative judgment or accountability. In liberal democracies, legitimacy derives from public deliberation and consent, not technical optimization. Even if an AI model could reliably maximize “overall welfare,” the perception of unfairness could erode trust and destabilize society.

In my opinion, AI is best used to inform decisions, not make them. AI can enhance efficiency and illuminate trade-offs, but it should not replace the collective reasoning, moral responsibility and negotiated consent that underpin current governance; preserving these is what sustains civic participation and legitimacy.

The elusive AI boom

As artificial intelligence develops at breakneck speed, economists are once again grappling with a familiar yet increasingly intricate question: Will this transformative technology lead to real, measurable gains in productivity and GDP?

Economists Daron Acemoglu, David Autor and Christina Patterson argue that AI’s productivity gains may be modest. They project just 0.05–0.07% annual productivity growth from AI, citing limitations in replacing nuanced human tasks. Meanwhile, the global management consulting firm McKinsey & Company and the investment banking company Goldman Sachs offer more bullish estimates: up to 3.4% with full diffusion and 7% global GDP growth over a decade, respectively.
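To make these competing estimates comparable, a back-of-the-envelope calculation helps: compounding the figures over a decade shows just how far apart the conservative and bullish scenarios sit. The sketch below uses the midpoint of the Acemoglu–Autor–Patterson range and McKinsey's upper bound; the exact modeling assumptions behind each forecast differ, so this is an illustration of scale, not a reconciliation.

```python
# Back-of-the-envelope: cumulative output gain over a decade under
# the competing AI productivity estimates cited above.

def cumulative_gain(annual_rate: float, years: int = 10) -> float:
    """Total percentage gain from compounding an annual growth rate."""
    return ((1 + annual_rate) ** years - 1) * 100

# Acemoglu, Autor and Patterson: ~0.05-0.07% per year (midpoint 0.06%)
low = cumulative_gain(0.0006)
# McKinsey upper bound: up to 3.4% per year with full diffusion
high = cumulative_gain(0.034)

print(f"Conservative estimate: ~{low:.1f}% over 10 years")   # under 1%
print(f"Bullish estimate:      ~{high:.1f}% over 10 years")  # roughly 40%
```

A sub-1% decade versus a roughly 40% decade is not a disagreement about decimals; it is a disagreement about whether AI registers in the aggregate statistics at all.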

This is not a new problem. Economist Robert Solow famously joked in 1987 that “you can see the computer age everywhere but in the productivity statistics.” Even as computers and the Internet spread, many countries saw little growth in output per worker. That history suggests AI’s diffusion may likewise deliver less transformative productivity growth than its most enthusiastic advocates expect.

Historical analogies illuminate this conundrum. The steam engine had high sectoral productivity but low macro impact due to narrow diffusion. By contrast, the information and communications technology revolution was broad-based, leading to wider gains. AI’s ultimate impact will similarly depend on its “factor share” in the economy: how much output it touches, how widely it diffuses and whether firms invest in complementary human and organizational capital.

Despite the striking capabilities of large language models (LLMs), generative design tools and autonomous agents, clear signs of an AI-fueled macroeconomic boom remain elusive. This gap between technological potential and observable economic outcomes invites a deeper question: Are we still too early in the adoption cycle? Or are our current measurement frameworks simply failing to capture where the change is already occurring?

A growing body of work, including “The Economics of Artificial Intelligence” by authors Ajay Agrawal, Joshua Gans and Avi Goldfarb, frames AI as a General-Purpose Technology (GPT) akin to electricity or the Internet — a platform that spreads across sectors, spurs complementary innovation and improves over time. Yet, as the book notes, GPTs have historically produced slow and uneven gains at the macro level. The authors stress the importance of rethinking economic indicators and developing new tools for tracking how AI reshapes growth beneath the surface.

In “Capitalism without Capital”, economists Jonathan Haskel and Stian Westlake argue that today’s economy has become increasingly intangible. Value creation now relies on assets like data, software, organizational knowledge and brand equity — resources that are difficult to measure and often underrepresented in traditional productivity statistics. While AI may be amplifying these intangible forms of capital, the value it generates doesn’t always translate neatly or immediately into measurable gains like profits or output. Lags in diffusion, accounting conventions and the complexity of attributing returns in multi-layered organizations may obscure AI’s true impact, even if it is ultimately reflected in firm performance over time.

AI as technology and meta-technology

Economists Timothy Bresnahan and Manuel Trajtenberg introduced the formal concept of GPTs: innovations that spread broadly and catalyze follow-on advances across the economy. AI fits this mold, but with a twist. As economist Zvi Griliches captured in the notion of an “invention of a method of invention,” AI is a recursive technology: It accelerates the process of innovation itself. When LLMs write code, simulate molecular structures or improve their own training, they do more than perform tasks — they reinvent the tools of creation.

This dual nature of AI complicates its economic role. It is both an output of research and development and an input into the next wave of innovation. GPT-4o, for instance, is not just a model; it is infrastructure for further experimentation. This recursive capacity blurs the boundary between labor and capital, producer and tool, and makes growth increasingly endogenous to the system itself.

How do we know if AI is actually changing the economy?

Commentators everywhere make bold claims about AI transforming industries and driving economic growth. So far, the macroeconomic data tells a more muted story. Productivity growth remains sluggish, and it is not yet clear whether AI is making a real impact or whether its benefits are simply slow to materialize.

To answer this, we need to look beyond aggregate productivity metrics and examine the microeconomic signals of transformation. Is AI adoption concentrated among a handful of “frontier firms” — businesses that strategically integrate AI agents into their core operations to achieve scaled transformation, agility and accelerated growth — that are already seeing performance gains? Is there sustained growth in investment not just in hardware, but in critical intangible assets like software, data infrastructure and worker training? These investments are often early indicators of technological diffusion, even if they don’t show up cleanly in traditional GDP measures.

A more precise assessment of AI’s economic impact begins by asking two questions. First, who is adopting it? If usage remains concentrated among a narrow set of frontier firms, the pace of diffusion may be far slower than headline narratives imply. Second, what are the adopters investing in?

Persistent growth in both hardware and complementary intangible assets, such as software, data infrastructure and workforce training, signals deeper adoption and helps lay the foundation for future productivity gains. Equally important is what’s happening within innovation pipelines, where early-stage breakthroughs can foreshadow more widespread economic effects.

Yet each of these indicators requires careful interpretation. Rising investment may reflect anticipatory positioning rather than realized returns: Firms may commit heavily to AI infrastructure because they expect others to do so, or out of a fear of competitive obsolescence, rather than from demonstrated efficiency gains. In such cases, the same metric can capture both genuine transformation and speculative overreach.

For policymakers and corporate strategists, distinguishing between these dynamics is essential. This requires looking beyond headline investment figures and evaluating supporting evidence: Are firms restructuring workflows, training workers and deploying AI at scale in ways that improve measurable outcomes? Or are they primarily accumulating infrastructure in anticipation of future gains? Comparing firm-level performance data, sectoral adoption patterns and workforce adjustments can help separate substantive transformation from speculative positioning.

On the innovation front, the signals are nonetheless striking. Global patent offices received approximately 35,000 AI-related filings in 2024, more than double the 15,000 recorded in 2018. AI-assisted research is producing breakthroughs across diverse fields — from protein folding to semiconductor design and pharmaceutical development — while cross-disciplinary AI publications in leading journals have more than tripled since 2017. Such developments may represent the initial swell of a broader productivity wave.

Ultimately, capturing AI’s true economic footprint demands a shift away from reliance on aggregate productivity or GDP figures alone. Taking a granular focus on diffusion patterns, firm-level performance and early innovation signals offers a more accurate reading of both realized progress and latent potential. Absent this nuance, we risk either undervaluing AI’s transformative capacity or inflating expectations on the basis of still-unrealized promise.

The financial strain of the AI infrastructure race

Over the past two years, Big Tech’s financial model has undergone a profound shift. Historically, firms such as Alphabet, Amazon, Meta and Microsoft thrived on “asset-light” operations. They were intellectual property and network-effect platforms that scaled revenue without proportionate increases in capital expenditures. This model yielded high free cash flow — the real money a company generates from operations after covering all expenses and investments — and valuations underpinned by low interest rates. The AI revolution has upended that formula. Since early 2023, inflation-adjusted investment in information-processing equipment has surged 23%, compared with only 6% GDP growth. This spending — dominated by GPUs, servers, networking gear and vast energy-hungry data centers — has accounted for more than half of US GDP growth in recent quarters, offsetting stagnant consumer demand.

Big Tech’s capital deployment is impressive in scale. Microsoft and Meta now allocate over one-third of sales to capital expenditures, contributing to a combined $102.5 billion in quarterly spending by the “Magnificent 7” stocks — Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia and Tesla — most of it concentrated in four hyperscalers. Free cash flow is falling sharply — down 30% since 2023 — even as net income rises on the strength of legacy businesses such as advertising and consumer devices. This divergence underscores a key uncertainty: While AI’s long-run productivity potential is widely acknowledged, the near-term financial returns from AI-specific infrastructure remain speculative. Current valuations effectively price these new, capital-heavy operations as if they will be as profitable as the old, asset-light models.
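The mechanics behind this divergence are simple arithmetic: free cash flow is what remains of operating cash flow after capital expenditures, so a capex surge can erode it even while reported income grows. The figures below are purely hypothetical round numbers chosen to illustrate the dynamic, not drawn from any company's actual filings.

```python
# Illustrative (hypothetical figures, not actual filings): how free cash
# flow can fall even as income holds up, once capital spending surges.

def free_cash_flow(operating_cash_flow: float, capex: float) -> float:
    """Cash remaining after funding capital investment, in $bn."""
    return operating_cash_flow - capex

# Hypothetical hyperscaler: operating cash flow grows modestly,
# while capital expenditures grow much faster.
before = free_cash_flow(operating_cash_flow=90, capex=30)    # pre-build-out
after = free_cash_flow(operating_cash_flow=100, capex=55)    # during build-out

print(f"FCF before: ${before:.0f}bn, after: ${after:.0f}bn")
```

In this toy example, operating cash flow rises by about 11% while free cash flow falls by 25% — the same shape as the divergence the article describes, where net income rises on legacy businesses even as free cash flow drops.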

History offers an instructive parallel: the late-1990s Internet and telecommunications build-out left behind valuable infrastructure but bankrupted many of its builders — companies like WorldCom and Global Crossing — whose aggressive spending outpaced sustainable revenues. Today’s AI expansion differs in that the main players are mature, cash-generating incumbents, and current demand for computing power exceeds supply.

However, the sustainability of present investment levels depends on optimistic revenue trajectories. A failure to achieve these would not only strain corporate balance sheets but could also dampen macroeconomic growth, especially as fiscal deficits, above-target inflation and the Federal Reserve’s balance-sheet contraction — when it reduces its holdings and draws liquidity from the financial system — push long-term interest rates structurally higher than in the post-global financial crisis environment.

Credit, correlation and crisis risk

The financing of the AI build-out is increasingly as important to assess as its technological promise. Six principal funding channels dominate: internal cash flows, debt issuance, equity, venture and private-equity capital, structured leasing and asset-backed vehicles, and cloud consumption commitments. While bond issuance by investment-grade tech firms is growing — this April, Alphabet issued its first bonds since 2020 — the more opaque surge is in private credit. Private credit funds, often financed by bank loans and insurance company capital, now channel billions into AI infrastructure. In August, Meta borrowed $29 billion from private-credit lenders, while CoreWeave and other GPU cloud providers have collateralized Nvidia chips to secure funding.

Systemic risk arises when sector-specific credit expansion interacts with correlated defaults. Historical evidence, notably from the Jordà-Schularick-Taylor Macrohistory Database, shows that crises are far more damaging when credit growth and asset bubbles coincide. While the dot-com collapse of 2000 inflicted equity losses without destabilizing banks — owing to limited bank exposure — today’s private credit expansion is closely intertwined with systemically important institutions. Banks now appear to provide a substantial share of liquidity to private credit lenders, exposing themselves indirectly to the higher risk profiles of these loans. If AI-related borrowers falter simultaneously, even short-term, senior bank loans could incur unanticipated “tail risk” losses — losses from extreme events in which returns deviate more than three standard deviations from the mean.
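The “three standard deviations” benchmark is worth making concrete. Under a normal distribution, a 3-sigma event is genuinely rare, which is exactly why risk models built on that assumption can be dangerous: correlated defaults of the kind described above fatten the tails, making extreme losses far more likely than the textbook figure suggests. A short sketch of the textbook number:

```python
import math

def tail_probability(sigmas: float) -> float:
    """One-sided probability of an outcome beyond `sigmas` standard
    deviations, assuming a normal distribution."""
    return 0.5 * (1 - math.erf(sigmas / math.sqrt(2)))

# Probability of a loss worse than three standard deviations below the
# mean, under the (optimistic) normality assumption:
p = tail_probability(3)
print(f"P(beyond 3 sigma) = {p:.5f}")  # about 0.00135, roughly 1-in-740
```

When borrowers in a single sector fail together, outcomes stop behaving like independent draws from a bell curve, and the true frequency of such “1-in-740” losses can be far higher — which is the core of the correlated-default concern.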

Insurance sector exposure adds another layer of vulnerability. Life insurers’ holdings of below-investment-grade corporate debt, much of it linked to private credit, now exceed their pre-2008 subprime mortgage-backed securities exposure. Sectoral focus amplifies this concentration risk: AI data centers and related infrastructure dominate the current private credit pipeline. While insurers are less central to payment systems than banks, their investment losses could propagate through capital markets and derivative exposures, as seen in the 2008 collapse of the company American International Group. The combination of concentrated sectoral lending, opaque credit channels and systemic lender entanglement suggests that while an AI-driven 2008-scale crisis is not imminent, the structural preconditions for financial instability are emerging and warrant early macroprudential scrutiny.

Hype, diffusion and the hidden fault lines of the AI economy

The AI boom thus carries a familiar duality. Like past general-purpose technologies, its ultimate economic contribution will depend less on headline capabilities than on the slow, uneven process of diffusion, the build-up of complementary assets and the ability of our measurement systems to register change where it happens. In the meantime, the infrastructure race has transformed Big Tech’s balance sheets and redirected vast pools of capital into data centers, chips and power supply — investments whose near-term returns remain uncertain. That uncertainty is magnified by the financial architecture now forming around the boom: private credit channels, insurer portfolios and bank exposures that could transmit sector-specific shocks more widely than the dot-com collapse ever did.

History suggests that transformative technologies rarely arrive as singular, economy-wide jolts. They seep in, reshape processes and eventually reorder industries, often in ways that defy early forecasts. Policymakers, investors and firms face the challenge of distinguishing hype from substance without underestimating the slow compounding of genuine innovation. The greatest risk may not be that AI fails to transform the economy, but that in preparing for its promise, we create new financial and systemic vulnerabilities that outpace the technology itself.

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

