More and more commentators warn of an “AI bubble”, and everybody seems to congratulate each other on being such a smart financial analyst. BUT: A bubble pops and you are left with air and maybe a splash of soap somewhere on the floor. A fairly clean affair. This kind of investor speak obscures the severe consequences of economic crashes, and it comes from the point of view of people for whom a crash is more likely to be a spectacle than a direct threat.
When the “AI” market crashes, there will be NO “reset button”, NO “rollercoaster” continuing on an orderly path after having come down, NO “bubble” that just lets off hot air. These metaphors heavily misrepresent what it means for markets to crash, or, as they say, “correct”. We might be in for a long and painful struggle to at least reduce the grip of “AI” on core societal functions like government administration, education and research funding. In this article, I want to illustrate the broad range of costs that BOTH the buildup of “AI” overvaluations AND their coming down will have. The current “AI” investments will have long-term costs by creating significant path dependencies: They make harmful things cheaper, speed up the commodification of human labour and shift social norms. Just to be clear: I am referring to the current “AI” boom, which is driven mostly by generative AI (“genAI”) applications, not necessarily to the things that have been around for decades (e.g. various forms of pattern recognition) and that did not induce companies to spend hundreds of billions on data centres.
To better understand what is going on, let’s first look at the outcomes of previous instances of overinvestment, including the 2000 dot-com “bubble” and the significant piles of money Uber burnt for many years, before turning to contemporary “AI” path dependencies.
Overinvestments shape technological paths
Let’s start with the obvious: The so-called dot-com boom crashed between 2000 and 2002 – which already hints at the fact that a “bubble bursting” is a long period during which no one knows when it will end. When it did end, investors lost money, and it was mostly their perspective that was covered in the media (and they whined that a crash is less bad than the pain inflicted by missing out on a boom). Many people lost their jobs and their livelihoods and had to find other ways to make ends meet (developers on Reddit gave an account, though they were probably among the more privileged). Unemployment in the US increased from about 4% to almost 6%.
What might be a little less obvious: A few key developments that the dot-com boom started persisted long after. While the internet remained a mostly academic affair into the 1990s, the dot-com boom kicked off the scale-over-everything, ad-based internet we know today. During the crash, we saw the alignment of the advertising business with finance, as well as a massive drive towards consolidation. Google, eBay, Amazon and Nvidia all became central players in the commercial internet. What now seems inevitable to most people seemed coincidental before the boom – and then today’s driving forces crowded out most other, less commercial ways of existing on the internet.
“AI” investments make harmful things cheaper, speed up the commodification of human labour and shift social norms.
The investment logic has shaped the internet ever since: Uber accumulated almost $34bn in losses (excluding losses from 2009 to 2014, before it went public) before it started to generate profits in 2023. (Various other unicorns also incur massive losses they may never recoup.) Money also creates habits and legitimises actions: Uber “broke laws, duped police and secretly lobbied governments”, as the Guardian titled in 2022, and its controversies got their own Wikipedia page. But its very official business model, too, is based on a) normalising precarious working conditions by eroding labour protections – it fought countless lawsuits over giving its workers only freelance rather than employee status, thereby avoiding sick pay, holidays etc. – and b) making human work seem more automated and less visible by mediating between passengers and drivers through an algorithm in an app, reducing the need for actual human interaction. Both developments shift social norms in ways that are likely to be profitable for businesses even beyond Uber: Uber’s mission can be understood as reducing the value of workers and human interaction, and with that long-term goal it is commercially rational to rack up significant losses in the short to medium term.
The “AI” path dependencies we will not correct
It is plausible to expect something similar to happen with “AI”. The ongoing investment boom is creating significant overcapacity, with effects lasting long after many “AI” startups have gone bust. These effects range from the very visible and direct to the more indirect and structural.
Lower costs for compute and energy: In order to sustain the boom, investments follow projections of endless “AI” growth, which translate into hundreds of billions currently being invested in data centre construction, alongside an expansion of energy infrastructure. And once data centres are built, it does not make sense to shut them down, does it? Keeping them running is much cheaper than constructing them. This puts us on a path of energy-intensive technology even once these data centres are no longer needed for “AI” applications, and it eliminates any incentive for resource-efficient coding or low-computation technology. At the same time, this infrastructure is not costless to maintain (to my knowledge, chips need to be replaced about every seven years) – but that cost is possibly still lower than that of doing anything else, so continuing on this trajectory will remain cheaper than the alternatives for quite some time to come. These artificially low costs of compute and energy will be even further from the “real” costs once the environmental harms they produce are factored in. That is very bad news for anybody still hoping for a combined digital and environmental transition.
Sectoral knowledge destruction: Some people may get used to using “AI” for a variety of tasks, even where this is neither actually helpful nor profitable for the “AI” providers. As is widely reported, managers often encourage or force their employees to code using genAI applications, university students use genAI applications for writing, public bodies continue to move onto fancy “AI” clouds and forget how to do on-premise computing, and we are likely to see more diffusion before the boom crashes. Just as Uber’s mission was broader than individual transport, “AI” has an inbuilt contempt for human interaction (as it is built to automate speech while avoiding any interpersonal friction) and for workers (as it seeks to make them even more interchangeable and subordinate to machine processes). Hence, using “AI” often destroys established processes of developing skills and sharing knowledge. Rebuilding them will take much longer and possibly cost more than whatever was saved in the meantime.
Yet more economic inequality: An economic crisis is not equally dangerous for everyone. Only a few companies are benefitting from the “AI” boom, and most of the stock market gains in recent years were driven by Big Tech valuations going absolutely through the roof. However, based on the experience of past financial crises, we can expect the losses to be shared much more widely. Big Tech is trying to portray itself as too big to fail, meaning its systemic relevance would prompt governments to inject tax money to reduce any losses it might incur. A financial crisis has ripple effects that go far beyond the market in question – the 2008 US housing crisis did not only lead to people losing their homes, but caused a huge recession with a stark increase in unemployment and financial instability.
Why “AI” overinvestments might take a long time to unwind
There is no way of knowing when the “AI” boom will crash – that is the whole point of markets, which are supposed to create collective rationality from individual choices. I see a few reasons to suspect an even longer and more painful struggle than after the dot-com crash of 2000. First, the boom is orchestrated, or arguably planned, by a handful of extremely powerful companies. Being few increases their scope to act strategically. Second, these companies are very close not only to the US government but, through their start-up investments, also to governments across the world, selling “AI” promises and lies to politicians. The push of small and large “AI” firms into military tech aggravates this dynamic: It is the area in which talk of an “AI race” carries quite intuitive meaning, because having more destructive power translates into military power, though not necessarily into better societal outcomes. And third, the intention of reaching systemic relevance is bearing some fruit, as more and more institutions become financially invested in “AI success”. Not only VC investors but large parts of society, including life insurance and pension funds, will bear the cost of its failure, giving them an incentive to prolong the boom at fairly high cost.
There are a few reasons to suspect an even longer and more painful struggle than the dot-com crash: market concentration, closeness to governments, and financial actors being invested in “AI success”.
What to do and why not to despair
It is important to analyse the abyss, but don’t stare into it, as Jathan Sadowski sometimes says on my current favourite podcast, This Machine Kills. Understanding what is happening is essential to figuring out a plan, and I am keen to do that with others (i.e. I am aware my suggestions are insufficient). Anything that keeps the boom from growing bigger than it needs to be is helpful (e.g. do not invest in “AI”, tell your friends not to, and do not use or pay for genAI applications). Anything that helps us talk in less delusional terms about what is going on is helpful (e.g. do not join those who talk about “bubbles” as if they were merely financial events, or as if they had one moment of coming down after which everything will be okay again). And let’s try to preserve the knowledge that companies are keen to replace with “AI”. We will need it.
Photo by Maxim Hopman on Unsplash

