bly bad is that it has no idea that there is a world about which it is mistaken.”
The inevitable result is errors of different sorts, the most damaging of which are “hallucinations”—statements that sound plausible but describe things that don't actually exist. This is where context becomes critical: In business settings, tolerance for error is already low and approaches zero when the stakes are high.
Code generation is a prime example. Software used in financially or operationally sensitive environments must be rigorously tested, edited, and debugged. A junior programmer equipped with generative AI can produce code with remarkable speed. But that output still requires careful review by senior engineers. As numerous anecdotes circulating online suggest, any productivity gained at the front end can disappear once the resources needed for testing and oversight are taken into account. The Bulwark's Jonathan Last put it well:
“AI is like Chinese machine production. It can create good outputs at an incredibly cheap price (measuring here in the cost of human time). Which means that AI—as it exists today—is a useful tool, but only for tasks that have a high tolerance for errors … if I asked ChatGPT to research a topic for me and I incorporated that research into a piece I was writing and it was only 90 percent correct, then we have a problem. Because my written product has a low tolerance for errors.”
In her new book The Measure of Progress, University of Cambridge economist Diane Coyle highlights another major concern: AI's opacity. “When it comes to AI,” she recently wrote, “some of the most basic facts are missing or incomplete. For example, how many companies are using generative AI, and in which sectors? What are they using it for? How are AI tools being applied in areas such as marketing, logistics, or customer service? Which firms are deploying AI agents, and who is actually using them?”
The Inevitable Reckoning
This brings us to the central question: What is the value-creating potential of LLMs? Their insatiable appetite for computing power and electricity, together with their dependence on costly oversight and error correction, makes profitability uncertain. Will business customers generate enough profitable revenue to justify the required investment in infrastructure and human support? And if several LLMs perform at roughly the same level, will their outputs become commodified, reducing token production to a low-margin business?
From railroads to electrification to digital platforms, massive up-front investment has always been required to deliver the first unit of service, while the marginal cost of each additional unit rapidly declined, often falling below the average cost needed to recover the initial investment. Under competitive conditions, prices tend to gravitate toward marginal cost, leaving all competitors operating at a loss. The result, time and again, has been regulated
monopolies, cartels, or other “conspiracies in restraint of trade,” to borrow the language of the Sherman Antitrust Act.
There are two distinct alternatives to enterprise-level LLM deployment. One lies in developing small language models—systems trained on carefully curated datasets for specific, well-defined tasks. Large institutions, such as JPMorgan or government agencies, could build their own vertical applications, tailored to their needs, thereby reducing the risk of hallucinations and lowering oversight costs.
The other alternative is the consumer market, where AI providers compete for attention and advertising revenue with the established social-media platforms. In this domain, where value is often measured in entertainment and engagement, anything goes. ChatGPT reportedly has 800 million “weekly active users”—twice as many as it had in February. OpenAI appears poised to follow up with an LLM-augmented web browser, ChatGPT Atlas.
But given that Google's and Apple's browsers are free and already integrate AI assistants, it is unclear whether OpenAI can sustain a viable subscription or pay-per-token revenue model that justifies its massive investments. Various estimates suggest that only about 11 million users—roughly 1.5% of the total—currently pay for ChatGPT in any form. So, consumer-focused LLMs may be condemned to bid for advertising revenue in an already-mature market.
The outcome of this ongoing horse race is impossible to call. Will LLMs eventually generate positive cash flow and cover the energy costs of operating them at scale? Or will the still-nascent AI industry fragment into a patchwork of specialized, niche providers while the largest companies compete with established social-media platforms, including those owned by their corporate investors? As and when markets recognize that the industry is splintering rather than consolidating, the AI bubble will be over.
Ironically, an earlier reckoning might benefit the broader ecosystem, though it would be painful for those who bought in at the peak. Such a deflation could prevent many of today's ambitious data-center projects from becoming stranded assets, akin to the unused railroad tracks and dark fiber left behind by past bubbles. In financial terms, it would also preempt a wave of high-risk borrowing that might end in yet another leveraged bubble and crash.
Most likely, a truly productive bubble will emerge only years after today's speculative frenzy has cooled. As the Gartner Hype Cycle makes clear, a “trough of disillusionment” precedes the “plateau of productivity.” Timing may not be everything in life, but for investment returns it pretty well is.
WILLIAM H. JANEWAY is a distinguished affiliated professor in economics at the University of Cambridge and author of Doing Capitalism in the Innovation Economy (Cambridge University Press, 2018). © Project Syndicate
DECEMBER 2025 | FINANCIAL ADVISOR MAGAZINE | 23