This article appears in the April 2025 issue of The American Prospect magazine.
The week of Donald Trump’s inauguration, Sam Altman, the CEO of OpenAI, stood tall next to the president as he made a dramatic announcement: the launch of Project Stargate, a $500 billion supercluster in the rolling plains of Texas that would run OpenAI’s massive artificial-intelligence models. Befitting its name, Stargate would dwarf most megaprojects in human history. Even the $100 billion that Altman promised would be deployed “immediately” would be much more expensive than the Manhattan Project ($30 billion in current dollars) and the COVID vaccine’s Operation Warp Speed ($18 billion), rivaling the multiyear construction of the Interstate Highway System ($114 billion). OpenAI would have all the computing infrastructure it needed to complete its ultimate goal of building humanity’s last invention: artificial general intelligence (AGI).
Art for this story was created with Midjourney 6.1, an AI image generator.
But the reaction to Stargate was muted as Silicon Valley had turned its attention west. A new generative AI model called DeepSeek R1, released by the Chinese hedge fund High-Flyer, sent a threatening tremor through the balance sheets and investment portfolios of the tech industry. DeepSeek’s latest version, allegedly trained for just $6 million (though this figure has been contested), matched the performance of OpenAI’s flagship reasoning model o1 at 95 percent lower cost. R1 even learned o1’s reasoning techniques, the much-hyped “secret sauce” that was supposed to preserve OpenAI’s wide technical lead over other models. Best of all, R1 is open-source down to the model weights, so anyone can download and modify the model for free.
It’s an existential threat to OpenAI’s business model, which depends on using its technical lead to sell the most expensive subscriptions in the industry. It also threatens to pop a speculative bubble around generative AI inflated by the Silicon Valley hype machine, with hundreds of billions at stake.
Venture capital (VC) funds, drunk on a decade of “growth at all costs,” have poured about $200 billion into generative AI. Making matters worse, the stock market’s bull run is deeply dependent on the growth of the Big Tech companies fueling the AI bubble. In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft—all of which are among the biggest spenders on AI. Just four—Microsoft, Alphabet, Amazon, and Meta—combined for $246 billion of capital expenditure in 2024 to support the AI build-out. Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years. Yet OpenAI, the current market leader, expects to lose $5 billion this year, with annual losses projected to swell to $14 billion by 2026. If the AI bubble bursts, it not only threatens to wipe out VC firms in the Valley but also to blow a gaping hole in the public markets and cause an economy-wide meltdown.
OpenAI’s Ever-Increasing Costs
The basic problem facing Silicon Valley today is, ironically, one of growth. There are no more digital frontiers to conquer. The young, pioneering upstarts—Facebook, Google, Amazon—that struck out toward the digital wilderness are now the monopolists, constraining growth with the onerous rentier fees they can charge because of their market-making size. The software industry’s spectacular returns from the launch of the internet in the ’90s to the end of the 2010s are never coming back, but venture capitalists still chase the chance to invest in the next Facebook or Google. This has led to what AI critic Ed Zitron calls the “rot economy,” in which VCs overhype a series of digital technologies—the blockchain, then cryptocurrencies, then NFTs, and then the metaverse—promising the limitless growth of the early internet companies. According to Zitron, each of these innovations failed either to transform existing industries or to become a sustainable industry itself, because the business case at the heart of these technologies was rotten: wasteful, bloated venture investments still selling an endless digital frontier of growth that no longer existed. Enter AGI, the proposed creation of an AI whose intelligence dwarfs any single person’s, and possibly the collective intelligence of humanity. Once AGI is built, its promoters say, it will easily solve many of the toughest challenges facing humanity: climate change, cancer, new net-zero energy sources.
And no company has pushed the coming of AGI more than OpenAI, which has ridden the hype to incredible heights since its release of generative chatbot ChatGPT. Last year, OpenAI completed a blockbuster funding round, raising $6.6 billion at a valuation of $157 billion, making it the third most valuable startup in the world at the time after SpaceX and ByteDance, TikTok’s parent company. OpenAI, which released ChatGPT in November 2022, now sees 250 million weekly active users and about 11 million paying subscribers for its AI tools. The startup’s monthly revenue hit $300 million in August, up more than 1,700 percent since the start of 2023, and it expects to clear $3.7 billion for the year. By all accounts, this is another world-changing startup on a meteoric rise. Yet take a deeper look at OpenAI’s financial situation and expected future growth, and cracks begin to show.
To start, OpenAI is burning money at an impressive but unsustainable pace. The latest funding round, its third in the last two years (atypical for a startup), also included a $4 billion revolving line of credit—a loan on tap, essentially—on top of the $6.6 billion of equity, revealing an insatiable need for investor cash to survive. Despite $3.7 billion in sales this year, OpenAI expects to lose $5 billion due to the stratospheric costs of building and running generative AI models: roughly $4 billion in cloud computing to run its AI models, $3 billion in computing to train the next generation of models, and $1.5 billion for its staff. According to its own numbers, OpenAI loses $2 for every $1 it makes, a red flag for the sustainability of any business. Worse, these costs are expected to increase as ChatGPT gains users and OpenAI seeks to upgrade its foundation model from GPT-4 to GPT-5 sometime in the next six months.
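The $2-for-$1 claim can be sanity-checked with a back-of-envelope sketch using the rounded figures reported above. Note that the published breakdown covers only the three largest line items, which is why the computed loss lands slightly under the reported $5 billion:

```python
# Back-of-envelope check of OpenAI's reported 2024 economics.
# All figures are in billions of dollars and taken from the article;
# smaller cost line items are omitted, so totals are approximate.
revenue = 3.7     # expected 2024 sales
inference = 4.0   # cloud computing to run the models
training = 3.0    # computing to train next-generation models
staff = 1.5       # payroll

costs = inference + training + staff      # 8.5
net_loss = costs - revenue                # 4.8, close to the reported $5B
spend_per_dollar = costs / revenue        # ~2.3, i.e. over $2 spent per $1 earned

print(f"costs: ${costs:.1f}B, loss: ${net_loss:.1f}B, "
      f"spend per $1 of revenue: ${spend_per_dollar:.2f}")
```

The ratio comes out slightly above 2, consistent with the article's "loses $2 for every $1 it makes" characterization.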
Financial documents reviewed by The Information confirm this trajectory, as the startup predicts its annual losses will hit $14 billion by 2026. Further, OpenAI sees $100 billion in annual revenue—a number that would rival Nestlé’s and Target’s annual sales—as the point at which it will finally break even. For comparison, Google’s parent company, Alphabet, only cleared $100 billion in sales in 2021, 23 years after its founding, and it did so with a portfolio of money-making products, including Google Search, the Android operating system, Gmail, and cloud computing.
OpenAI is deeply dependent on hypothetical breakthroughs from future models that unlock more capabilities to boost its subscription price and grow its user base. Its GPT-5 class models and
19 Comments
cratermoon
Confirming Ed Zitron's careful analysis of the situation with SoftBank, Microsoft, and OpenAI: https://www.wheresyoured.at/optimistic-cowardice/.
"Outside of NVIDIA, nobody is making any profit off of generative AI, and once that narrative fully takes hold, I fear a cascade of events that gores a hole in the side of the stock market and leads to tens of thousands of people losing their jobs."
You'll note that the prospect story mentions Zitron multiple times.
bhouston
I do worry about the viability of OpenAI in particular. So much of its talent went to other firms, which then built up amazing capabilities, like Anthropic with Claude. And it also faces the threat of open-source models like DeepSeek v3.1 and, soon, DeepSeek R2, while at the same time OpenAI is raising its prices to absurd levels. I guess they are trying to be the Apple of the AI world… maybe…
That said, I expect protectionist policies will be enacted by the US government to protect them, and also X.AI/Grok, from foreign competition, in particular Chinese competition.
elorant
One of the reasons I'd like this bubble to burst is to see Nvidia's stock collapse. I'm sick and tired of their exorbitant prices.
MaxPock
We appreciate China for its strong push toward open-source AI. Without models like DeepSeek and Qwen, the U.S. was set to dominate AI with closed-source systems, charging tens of billions in rent every month while deciding who gets access based on politics.
"Hey, Eritrea, you're authoritarian—you can't use our democratic AI until you democratize."
"Hey, Saudi Arabia and Qatar, you're not authoritarian—you can have our AI."
Once again, thank you, Chairman Xi, for saving us from this nonsense.
bionhoward
Epictetus would hate this trend to outsource mental work
strict9
I'm usually skeptical of doomer articles about new technology like this one, but I reluctantly find myself agreeing with a lot of it. While AI is a great tool with many possibilities, I don't see the alignment between what many of these new AI startups are selling and what the technology actually delivers.
It makes my work more productive, yes. But it often slows me down too. Knowing when to push back on the response you get is often difficult to get right.
This quote in particular:
>Surveys confirm that for many workers, AI tools like ChatGPT reduce their productivity by increasing the volume of content and steps needed to complete a given task, and by frequently introducing errors that have to be checked and corrected.
This sort of mirrors my work as a SWE. It does increase productivity and can reduce lead times for task completion. But it requires a lot of checking and pushback.
There's a large gap between increased productivity in the right field in the right hands vs copying and pasting a solution everywhere so companies don't need workers.
And that's really what most of these AI firms are selling. A solution to automate most workers out of existence.
greatpostman
Calling everything a bubble is low iq. People that cannot understand society, or accept change, run with the bubble narrative at every turn
ausbah
still loads of money to be made in being the company hosting models on your fleet of GPUs. open source models and training paradigms definitely have undercut the proprietary model moat, but you need a good chunk of compute to run these models and not everyone has or wants that compute themselves
jbreckmckye
This journalist, Ed Zitron, is very skeptical of AI and his arguments border on polemic. But I find his perspective interesting – essentially, that very few players in the AI space are able to figure out a profitable business model:
https://www.wheresyoured.at/core-incompetency/
fullshark
The end result of this wave looks increasingly like it will get us an open-web blogspam apocalypse, better search / information retrieval, and better autocomplete for coders. All useful (well, useful to bloggers/spammers at least), but not trillions of dollars in value generated.
Until a new architecture / approach takes root at least.
lukev
It's really hard to accurately assess the possibilities granted by LLMs, because they just feel like they have so much potential.
But ultimately I think Satya Nadella is right. We can speculate about the potential of these technologies all we want, but they are now here. If they are of value, then they'll start to significantly move the needle on GDP, beyond just the companies focused on creating them.
If they don't, then it's hype.
softwaredoug
I worry about the cultural shift in Tech to "what have you done for me lately" over patient innovation. Due to no more ZIRP, due to a shift to very top-down management, narcissistic CEO bros, and the new focus to please investors over all else… There's little appetite for actual innovation, which would require IMO a different culture and much more trust between management and employees. So instead, there's top-down AI death marches to "innovate" because that's the current trend.
But who is DEFINING the trend? Who is actually trying to stand out and do something different?
There are glimmers of hope in tiny bootstrapped startups now. That seems to be the sweet spot: not needing to obsess over investor sentiment, and instead focusing on being lean and having a small team with the trust to actually try new things. Though this time with a focus on early profitability, so they can dictate terms to investors, not the other way around.
Ologn
If I look at Nvidia stock from mid-June of last year, or the IYW index (Apple, Microsoft, Facebook, Google) – NVDA is down 10%, IYW is down maybe 2-3%. It doesn't feel like I'm in the middle of a huge bubble like, say, the beginning of 2000.
danans
> But the reaction to Stargate was muted as Silicon Valley had turned its attention west. A new generative AI model called DeepSeek R1, released by the Chinese hedge fund High-Flyer, sent a threatening tremor through the balance sheets and investment portfolios
You gotta love how this paragraph reads like an unfolding battle scene from a Tolkien novel.
keiferski
I can't say for all possible implementations, but IMO (from industry experience) the content and consumer-focused benefits of AI/LLMs have been very much over-hyped. No one really wants to watch an AI-generated video of their favorite YouTuber, or pay for an AI-written newsletter. There is a ceiling to the direct usefulness of AI in the media industry, and the successful content creators will continue to be personality-driven and not anonymously generic.
Whether that also applies to B2B is a different question.
Imustaskforhelp
I think AI is a bubble in the same sense that the airline industry is.
The airline industry is notoriously hard to make profitable. (I picked that up from The Intelligent Investor.)
So just because something is useful doesn't necessarily mean it's profitable, yet the VCs are funding it expecting such profitability without any signs of true profitability so far.
I mean, yes, AI is profitable, but most of the profitability doesn't come from real use cases; the majority comes from just slapping an "AI" sticker on things to raise your company's valuation, and that's satisfying the VCs right now. But they want returns as well.
And if their return depends on a bigger fool / bigger VC funding the AI company at a higher valuation despite little to no profitability, then THAT IS A BUBBLE, by definition.
But being a bubble doesn't mean it has no use cases. AI is going to be useful; it's just not going to be THAT profitable, and the current profits show the characteristics of a bubble.
h4ny
That's a great article and a lot of the comments seem to resonate with the article. But somehow this is disappearing from the front page faster than anything else, it's hard not to think that "this is bad for business, so it must go"…
cubefox
In the last few years we have seen unprecedented progress in AI. Relatively recently, LLMs like ChatGPT were regarded as pure science fiction. Current text-to-image models? Unthinkable. And then people still try to argue that it is just a bubble. People have the concerning tendency not to learn from evidence they previously judged as being extremely unlikely. The evidence is now clearly indicating that humanity is on the cusp of developing superhuman general intelligence. The remaining time is probably measured in years rather than decades or centuries.
Nazzareno
"It's evident that while AI presents transformative potential, the surrounding financial speculation warrants caution. The challenge lies in distinguishing between genuine technological advancements and market hype. As the industry evolves, a balanced approach that values innovation while remaining vigilant about speculative investments will be crucial to navigate the AI landscape effectively."
{comment by ChatGPT after reading the article and all the comments here}