- Anthropic is working with FluidStack to build data centers in at least two U.S. states
- The move echoes OpenAI's efforts to build its own AI compute infrastructure
- Analysts said the model makers are hedging their bets against cloud hyperscalers and looking for more control over supply and costs
It seems OpenAI isn’t the only AI model creator making big investments in compute infrastructure. Rival Anthropic just revealed plans to spend $50 billion on the construction of data centers in the U.S. It is partnering with FluidStack on the project. But what does this trend mean for hyperscalers?
The first of Anthropic’s facilities will be located in Texas (which is also home to one of OpenAI’s Stargate campuses) and New York, “with more sites to come.” Its data centers are expected to come online throughout 2026.
At this point, it’s not entirely clear how large the facilities will be in terms of capacity, or whether they’ll be built to the massive gigawatt scale OpenAI is striving for. Their size will likely depend on how much power Anthropic is able to secure in each of the markets it builds in.
The news comes after Anthropic in late October inked a compute contract with Google Cloud worth “tens of billions of dollars” to use up to one million of the cloud giant’s TPUs.
Shortly after the Google announcement, AWS announced the activation of nearly half a million Trainium2 chips for Anthropic as part of an $11 billion collaboration called Project Rainier. The project is set to scale to more than one million chips by the end of this year.
A new dawn for data centers and AI
Between OpenAI’s hundreds of billions of dollars in compute commitments and Anthropic’s own multibillion-dollar push, a picture is emerging of AI model leaders positioning themselves as compute titans in their own right.
The open question is whether investments like Stargate and Anthropic’s project with FluidStack will be used solely for internal compute needs.
J. Gold Associates Founder and Principal Analyst Jack Gold told Fierce there are a few factors motivating Anthropic and OpenAI’s data center moves.
First and foremost, he said, it’s about control – control over costs, control over compute availability and control over customization. Essentially, the AI model makers are hedging their bets against hyperscalers.
Gold added they’re also ensuring their model development work stays confidential and gaining more insight into where their data is stored and processed (hello, data sovereignty!). As an added bonus, these companies get to score political points for having made substantial investments in U.S. technology – something that seems to be a priority for President Donald Trump’s administration.
Sid Nag, president and chief research officer at Tekonyx, agreed that the trend is largely about control.
“With frontier AI models, the compute and power demands are enormous — and generic cloud services such as the hyperscalers may become cost-prohibitive or sub-optimized for their specific workloads,” he told Fierce. “They will be targeting efficiencies (both compute costs and energy costs) that give them a cost/performance edge.”
Interestingly, Nag also noted that building their own infrastructure will allow companies like Anthropic and OpenAI to ensure they can serve their enterprise customers despite widespread compute shortages and rationing.
“Anthropic has many startups that are their customers. They want to serve [this] customer base without leaving their destiny to the hyperscalers,” Nag explained.
Of course, the cost of building their own data centers means Anthropic and OpenAI are taking an upfront risk, Nag said. The trade-off? The possibility of “lower marginal cost per model/inference over time.”
As for what it all means for hyperscalers, Nag said public cloud providers “will face margin pressure or disintermediation if large AI-native firms like Anthropic shift more compute in-house or to custom facilities.”