The Honeymoon Rate: AI Echoing the Dot-Com Boom/Bust
TL;DR: AI is real. Current AI pricing is not. The technology is new. The business model dynamics are not. Adopt aggressively, commit cautiously, and build everything to survive a provider change, a price correction, or the startup you depend on disappearing.
We've seen this movie before
In my conversations with enterprise leaders over the past year, I keep seeing two failure modes. Smart people, under board pressure to show AI adoption, are making fast commitments on unstable ground. They sign contracts, build dependencies, and defer the hard questions about what happens when the pricing changes. The pressure to show progress is producing decisions optimized for the next board deck, not for the next five years.
Other organizations demand FedRAMP authorization from two-year-old startups, five-year support commitments from providers that won't be profitable for four, and compliance guarantees against regulations that haven't been written yet. By the time the requirements document is finished, the technology has moved on. These organizations aren't being prudent. They're applying procurement frameworks built for a stable market to a market that is anything but.
Both groups are making the same error from opposite directions: treating a volatile, immature market as though it were stable.
This is genuine technology following a well-documented market cycle. The technology is new. The money isn't. VC-funded market capture, subsidized pricing, dependency building, and eventual correction: we've run this playbook with cloud computing, with ride-sharing, with the original dot-com platforms.
You're not paying the real price for tokens
Nobody has had to charge the real price of a token yet. OpenAI won't be profitable until at least 2030. The investor documents are public and they all say the same thing: the leading AI providers are selling below cost to capture the market. This isn't controversial. It's the stated strategy.
Every cost-benefit analysis I've reviewed uses today's token pricing as the baseline. Today's pricing is a promotional rate. That's not a reason to avoid AI. It is a reason to be skeptical of any business case that treats current pricing as permanent.
Uber didn't disrupt taxis: it subsidized them until the real price came back
Uber entered markets by sidestepping taxi regulations and using venture capital to subsidize rides. In Madrid in 2014, Uber's hourly subsidy to drivers was nearly twice the fare it charged riders. Internal presentations described this as "buying revenue."
Then Uber went public. Prices climbed roughly 18% per year. Drivers took home 12% less per trip. Nobody at Uber decided to make the service worse. Going public means the investors who funded the subsidy expect their return. The squeeze is structural.
OpenAI and Anthropic are both expected to IPO this year. When they do, the same structural pressure arrives. The investors who funded $207 billion in compute aren't philanthropists. That return comes from somewhere.
Automating a bad process just makes it fail faster
Rory Sutherland called it the Doorman Fallacy in Alchemy (2019). A hotel replaces its doorman with an automatic door. Two years later the hotel's reputation has tanked, because the doorman wasn't just opening the door. He was greeting regulars, providing security, hailing taxis, and signaling that the hotel valued its guests enough to put a human at the entrance. The consultant defined the role by its most visible task and destroyed everything else.
This shows up in AI-driven "efficiency" initiatives. A company replaces its sales intake team with a chatbot that collects the same information. But the sales team wasn't just collecting information. They were reading the customer, qualifying fit, and building the relationship that carries the account through the first product hiccup. The chatbot does the visible task. The invisible work disappears.
There's also the Office Space problem. Tom Smykowski's job, taking specs from customers to engineers, exists because two groups that should talk to each other don't. An AI agent doing that job faster doesn't fix the dysfunction. It makes the dysfunction cheaper to maintain, which means nobody will ever fix it.
These problems compound with the pricing problem: you're destroying hidden value at the same time you're underestimating the true cost of the replacement.
The enterprise implementation landscape has no map
I've spent my career securing technology for large enterprises. The current state of AI adoption is messy in ways the hype cycle doesn't cover.
95% of enterprise GenAI pilots fail to deliver measurable ROI. AI startups fold at a 90% rate. Every procurement and security team I talk to is solving the same problems from scratch because there are no settled implementation standards. If you build on a startup's tooling and that startup folds (the statistically likely outcome), you're doing a rip-and-replace on a system your organization now depends on.
Even the frontier labs present risk. What does "enterprise support" look like from a company projecting $74 billion in operating losses for 2028? When cloud computing was at this stage, at least the major providers had predictable business models. In the AI market, even the leaders haven't proven theirs work at scale.
Adopt aggressively, commit cautiously
None of this means don't use AI. Sitting this out has its own costs. But there's a difference between using a technology and betting your operations on its current pricing, providers, and implementation patterns all remaining stable.
Use everything, depend on nothing you can't replace. Build provider independence into your architecture. AWS Bedrock's Converse API lets you switch between foundation models with a parameter change. That's an architectural pattern, not a vendor recommendation. The principle is abstraction layers that let you leave without rebuilding.
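The abstraction-layer principle can be sketched in a few lines. This is an illustrative pattern, not a real SDK: the provider names and the stub backends below are hypothetical stand-ins for whatever vendor clients you actually wrap. The point is that call sites depend on one internal interface, so a provider change is a registry edit rather than a rewrite.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Callers depend on LLMClient.complete(), never on a vendor SDK directly.
# Each adapter wraps one provider behind the same prompt -> text signature.
@dataclass
class LLMClient:
    providers: Dict[str, Callable[[str], str]]  # name -> adapter function
    active: str                                 # currently selected provider

    def complete(self, prompt: str) -> str:
        # Route through whichever provider is active; call sites don't change.
        return self.providers[self.active](prompt)

    def switch(self, provider: str) -> None:
        # Swapping providers is a one-line configuration change.
        if provider not in self.providers:
            raise KeyError(f"no adapter registered for {provider!r}")
        self.active = provider

def stub_backend(name: str) -> Callable[[str], str]:
    # Hypothetical adapter; in practice this would wrap a vendor API call.
    return lambda prompt: f"[{name}] {prompt}"

client = LLMClient(
    providers={
        "provider_a": stub_backend("provider_a"),
        "provider_b": stub_backend("provider_b"),
    },
    active="provider_a",
)
first = client.complete("hello")   # served by provider_a
client.switch("provider_b")        # provider change: no call-site edits
second = client.complete("hello")  # same interface, different backend
```

Bedrock's Converse API applies the same idea one level down (a `modelId` parameter selects the model); the sketch above is what the pattern looks like when you own the abstraction yourself.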
Apply the Doorman test to every efficiency decision: are you automating a task, or destroying a capability? Run a 3x test on every AI dependency: what happens if the price triples or the provider disappears? Write every business case assuming current pricing is a promotional rate. Because it is.
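The 3x test can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, assuming you can estimate a project's monthly value and its monthly token spend (both figures here are invented for illustration):

```python
def three_x_test(monthly_value: float, monthly_token_cost: float,
                 multiplier: float = 3.0) -> dict:
    """Does the business case survive a price correction of `multiplier`x?"""
    stressed_cost = monthly_token_cost * multiplier
    return {
        "current_margin": monthly_value - monthly_token_cost,
        "stressed_margin": monthly_value - stressed_cost,
        "survives": monthly_value > stressed_cost,
    }

# A pilot generating $50k/month in value on $10k/month of tokens
# still clears at 3x pricing, with margin cut from $40k to $20k:
result = three_x_test(monthly_value=50_000, monthly_token_cost=10_000)
```

A project that only works at promotional pricing fails this test; a project with real margin survives it. Run the same function with `multiplier=0` mentally replaced by "provider disappears" and the question becomes migration cost instead.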
The dot-com era didn't punish people for using the internet
The dot-com correction didn't kill the internet. It killed commitments that couldn't survive the transition from speculative pricing to real economics. The companies that signed exclusive deals with Yahoo in 1999 got burned. So did the companies that sat it out and decided the internet was a fad.
Imperfect action beats inaction. The companies that came out of the dot-com era strongest moved early and built to survive the correction. They tried things, kept what worked, dropped what didn't, and never locked themselves to a single provider or architecture.
AI will have its Googles and its Yahoos. Build so you benefit either way.
References
- Rory Sutherland, Alchemy (2019) — the Doorman Fallacy
- Cory Doctorow, "Enshittification" (2023) — platform decay as structural pattern
- Reid Hoffman and Chris Yeh, Blitzscaling (2018) — subsidized market capture
- Fast Company / Wall Street Journal — OpenAI and Anthropic financial projections (2026)
- HSBC Global Investment Research — OpenAI funding shortfall analysis (2025)
- MIT FutureTech — 95% GenAI pilot failure rate (2025)
- National Employment Law Project — Uber post-IPO pricing data (2025)