On a foggy morning last October, a long line stretched outside an AI developer conference in San Francisco. Hoodies, laptops, quiet conversations about training runs and tokens. At first glance, it looked like just another tech event. But the tone was different: more urgent, almost combative, less like collaboration and more like a race already underway.
At the center of that race is OpenAI, a company that has turned a research lab into something closer to infrastructure. Its models are more than tools; they are platforms woven into workflows, software, and even everyday discourse. The scale is hard to overlook: hundreds of millions of users interact with them each week. Yet competitors are not retreating, perhaps because of that very dominance. They are expanding.
| Category | Details |
|---|---|
| Industry | Artificial Intelligence |
| Key Player | OpenAI |
| Major Rivals | Anthropic, Google DeepMind, Meta, xAI |
| Core Technology | Large Language Models (LLMs) |
| Key Trend | Increasing model size and compute power |
| Estimated AI Spending | $600+ billion (Big Tech, 2026 projection) |
| Market Structure | Oligopoly (few dominant players) |
| Business Shift | Moving from models to applications |
| Key Challenge | High costs vs. unclear revenue |
| Reference | https://www.reuters.com |
For years, the operating assumption was that bigger models, trained on more data with more computing power, would simply keep getting better. Then doubts crept in. Costs skyrocketed. Gains looked incremental. Even OpenAI’s own leadership has suggested that scaling alone might not be enough.
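The scaling-law literature gives that old assumption a concrete shape. One widely cited form is the Chinchilla relation published by DeepMind (Hoffmann et al., 2022), sketched below; the symbols come from that paper rather than from anything reported in this story, and the constants are fitted empirically, not universal:

```latex
% Chinchilla-style scaling law (Hoffmann et al., 2022):
% expected loss as a function of parameter count N and training tokens D.
% E is the irreducible loss; A, B, \alpha, \beta are empirically fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Because both correction terms decay as power laws, each additional order of magnitude of compute buys a smaller absolute drop in loss, which is one way to read why the gains began to look incremental even as spending soared.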
Walk past the enormous, windowless data centers now under construction, however, and it is clear the industry hasn’t abandoned the idea. If anything, the bet is intensifying.
Companies like Anthropic and Google DeepMind are investing heavily in making models both bigger and better. Larger training runs. More parameters. Longer context windows. On the surface, the logic is simple: if capabilities are converging, scale becomes a differentiator again. Beneath that logic, though, sits a quieter worry: the fear of falling behind in a field where progress can feel sudden and unforgiving.
The numbers are hard to ignore. Analysts project that tech companies may spend more than $600 billion on AI infrastructure in 2026 alone. Capital on that scale flows only on expectation. And beneath it sits a troubling question: where will the returns come from? Revenue is growing, yes, but not always at a pace that justifies the outlay. As I watch this unfold, it looks more like a calculated risk than a solid business plan.
Competition itself supplies part of the answer. When one company moves, others follow, not always because it makes immediate sense, but because the alternative seems riskier. In conversations with venture capitalists, uncertainty comes up again and again. Nobody knows for sure whether we’ve hit the ceiling, or what the ceiling even looks like. That uncertainty drives behavior.
Whether this is innovation or inertia is still up for debate. There is also a strategic layer that is harder to see at first. As models converge in capability, a kind of commoditization creeps in, and companies hunt for new ways to differentiate themselves.
Larger models can support more sophisticated applications and specialized tools. They create options. And in a market where applications may ultimately drive the profits, owning the best base model still matters. But that creates a tension of its own.
Some of these companies are now building their own applications: financial software, legal assistants, coding tools. In doing so, they sometimes compete with the very developers who build on their models. The pattern is a familiar one in tech, echoing what platform companies have done before. But it has a sharper edge now, because the dependency runs deeper.
The historical parallels are hard to miss. In earlier tech cycles, from cloud computing to mobile platforms, scale often came before profits. Amazon spent years building infrastructure before it paid off. Tesla burned through cash before proving electric vehicles could work.
There is a belief, often unspoken, that AI will take a similar course. But belief is not certainty. And the costs are real.
Training a frontier model now demands enormous amounts of energy, specialized chips, and time. Even well-funded companies feel the strain. Smaller players, unable to compete at that level, are effectively shut out. The market grows more concentrated, and a handful of powerful companies end up shaping the direction of the technology.
Some researchers, however, argue that the next breakthroughs may come not from larger models but from better methods: more efficient training, new architectures, smarter use of data. That idea lingers in the background, quietly undermining the premise that scale is everything. For now, though, scale remains the most visible strategy.
Standing outside those data centers, the ones without windows, filled with rows of machines processing language at unfathomable speeds, it’s hard to shake the feeling that something massive is being built, physically and financially. Whether it all holds together remains to be seen.
For now, the race goes on. And nobody seems ready to slow down.
