Cerebras Systems, an AI hardware startup that has been steadily challenging Nvidia's dominance in the artificial intelligence market, announced Tuesday a significant expansion of its data center footprint and two major enterprise partnerships that position the company to become the leading provider of high-speed AI inference services.
The company will add six new AI data centers across North America and Europe, increasing its inference capacity twentyfold to over 40 million tokens per second. The expansion includes facilities in Dallas, Minneapolis, Oklahoma City, Montreal, New York, and France, with 85% of the total capacity located in the United States.
"This year, our goal is to truly satisfy all the demand and all the new demand we expect will come online as a result of new models like Llama 4 and new DeepSeek models," said James Wang, Director of Product Marketing at Cerebras, in an interview with VentureBeat. "This is our huge growth initiative this year to satisfy almost unlimited demand we're seeing across the board for inference tokens."
The data center expansion represents the company's ambitious bet that the market for high-speed AI inference, the process by which trained AI models generate outputs for real-world applications, will grow dramatically as companies seek faster alternatives to GPU-based solutions from Nvidia.
Strategic partnerships that bring high-speed AI to developers and financial analysts
Alongside the infrastructure expansion, Cerebras announced partnerships with Hugging Face, the popular AI developer platform, and AlphaSense, a market intelligence platform widely used in the financial services industry.
The Hugging Face integration will allow its five million developers to access Cerebras Inference with a single click, without having to sign up for Cerebras separately. This represents a major distribution channel for Cerebras, particularly for developers working with open-source models like Llama 3.3 70B.
"Hugging Face is kind of the GitHub of AI and the center of all open source AI development," Wang explained. "The integration is super nice and native. You just appear in their inference providers list. You just check the box and then you can use Cerebras right away."
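For developers, that flow maps onto Hugging Face's inference-providers interface, where a backend is selected per client or per request. Below is a minimal sketch of what routing a chat completion to Cerebras might look like, assuming the `huggingface_hub` Python client; the provider name, model checkpoint, and placeholder token are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch: calling an open-source model through Hugging Face's
# inference-providers interface with Cerebras selected as the backend.
# Provider name, model ID, and token are assumptions for illustration.
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="cerebras",   # pick Cerebras from the inference providers list
    api_key="hf_xxx",      # a Hugging Face access token (placeholder)
)

response = client.chat_completion(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Explain wafer-scale inference in one sentence."}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```

The "single click" Wang describes is essentially this provider selection: authentication stays on the Hugging Face side, which is why no separate Cerebras sign-up is needed.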
The AlphaSense partnership represents a significant enterprise customer win, with the financial intelligence platform switching from what Wang described as a "global, top three closed-source AI model vendor" to Cerebras. The company, which serves roughly 85% of Fortune 100 companies, is using Cerebras to accelerate its AI-powered search capabilities for market intelligence.
"This is a tremendous customer win and a very large contract for us," Wang said. "We speed them up by 10x, so what used to take five seconds or longer basically becomes instant on Cerebras."

How Cerebras is winning the race for AI inference speed as reasoning models slow down
Cerebras has been positioning itself as a specialist in high-speed inference, claiming its Wafer-Scale Engine (WSE-3) processor can run AI models 10 to 70 times faster than GPU-based solutions. This speed advantage has become increasingly valuable as AI models evolve toward more complex reasoning capabilities.
"If you listen to Jensen's remarks, reasoning is the next big thing, even according to Nvidia," Wang said, referring to Nvidia CEO Jensen Huang. "But what he's not telling you is that reasoning makes the whole thing run 10 times slower because the model has to think and generate a bunch of internal monologue before it gives you the final answer."
This slowdown creates an opportunity for Cerebras, whose specialized hardware is designed to accelerate these more complex AI workloads. The company has already secured high-profile customers including Perplexity AI and Mistral AI, which use Cerebras to power their AI search and assistant products, respectively.
"We help Perplexity become the world's fastest AI search engine. This is not possible otherwise," Wang said. "We help Mistral achieve the same feat. Now they have a reason for people to subscribe to Le Chat Pro, whereas before, your model is probably not the same cutting-edge level as GPT-4."

The compelling economics behind Cerebras' challenge to OpenAI and Nvidia
Cerebras is betting that the combination of speed and cost will make its inference services attractive even to companies already using leading models like GPT-4.
Wang pointed out that Meta's Llama 3.3 70B, an open-source model that Cerebras has optimized for its hardware, now scores the same on intelligence tests as OpenAI's GPT-4 while costing significantly less to run.
"Anyone who is using GPT-4 today can just move to Llama 3.3 70B as a drop-in replacement," he explained. "The price for GPT-4 is [about] $4.40 in blended terms. And Llama 3.3 is like 60 cents. We're about 60 cents, right? So you reduce cost by almost an order of magnitude. And if you use Cerebras, you increase speed by another order of magnitude."
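The "drop-in replacement" claim rests on OpenAI-compatible APIs: if the serving endpoint speaks the same chat-completions protocol, switching from GPT-4 to Llama 3.3 70B amounts to changing a base URL and a model name. Here is a minimal sketch, assuming a hypothetical OpenAI-compatible Cerebras endpoint and model identifier (neither is specified in the article).

```python
# Minimal sketch of a "drop-in replacement": the same OpenAI client code,
# pointed at a different OpenAI-compatible endpoint and model name.
# The base URL, model identifier, and key prefix are assumptions for
# illustration, not details confirmed in the article.
from openai import OpenAI

# Before: OpenAI's hosted GPT-4.
# client = OpenAI(api_key="sk-xxx")
# model = "gpt-4"

# After: an OpenAI-compatible provider serving Llama 3.3 70B.
client = OpenAI(
    base_url="https://api.cerebras.ai/v1",  # assumed endpoint URL
    api_key="csk-xxx",                      # provider API key (placeholder)
)
model = "llama-3.3-70b"                     # assumed model identifier

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Summarize today's market movers."}],
)
print(response.choices[0].message.content)
```

At the quoted blended prices ($4.40 versus roughly $0.60), the ratio works out to about 7x, the "almost an order of magnitude" cost reduction Wang cites.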
Inside Cerebras' tornado-proof data centers built for AI resilience
The company is making substantial investments in resilient infrastructure as part of its expansion. Its Oklahoma City facility, scheduled to come online in June 2025, is designed to withstand extreme weather events.
"Oklahoma, as you know, is kind of a tornado zone. So this data center actually is rated and designed to be fully resistant to tornadoes and seismic activity," Wang said. "It will withstand the strongest tornado ever recorded. If that thing just goes through, this thing will just keep sending Llama tokens to developers."
The Oklahoma City facility, operated in partnership with Scale Datacenter, will house over 300 Cerebras CS-3 systems and features triple-redundant power stations and custom water-cooling solutions designed specifically for Cerebras' wafer-scale systems.

From skepticism to market leadership: How Cerebras is proving its worth
The expansion and partnerships announced today represent a significant milestone for Cerebras, which has been working to prove itself in an AI hardware market dominated by Nvidia.
"I think what was reasonable skepticism about customer uptake, maybe when we first launched, I think that is now fully put to bed, just given the diversity of logos we have," Wang said.
The company is targeting three specific areas where fast inference provides the most value: real-time voice and video processing, reasoning models, and coding applications.
"Coding is one of these kind of in-between reasoning and regular Q&A that takes maybe 30 seconds to a minute to generate all the code," Wang explained. "Speed directly is proportional to developer productivity. So having speed there matters."
By focusing on high-speed inference rather than competing across all AI workloads, Cerebras has found a niche where it can claim leadership over even the largest cloud providers.
"Nobody generally competes against AWS and Azure on their scale. We don't obviously reach full scale like them, but to be able to replicate a key segment… on the high-speed inference front, we will have more capacity than them," Wang said.
Why Cerebras' US-centric expansion matters for AI sovereignty and future workloads
The expansion comes at a time when the AI industry is increasingly focused on inference capabilities, as companies move from experimenting with generative AI to deploying it in production applications where speed and cost-efficiency are critical.
With 85% of its inference capacity located in the United States, Cerebras is also positioning itself as a key player in advancing domestic AI infrastructure at a time when technological sovereignty has become a national priority.
"Cerebras is turbocharging the future of U.S. AI leadership with unmatched performance, scale and efficiency – these new global datacenters will serve as the backbone for the next wave of AI innovation," said Dhiraj Mallick, COO of Cerebras Systems, in the company's announcement.
As reasoning models like DeepSeek R1 and OpenAI's o3 become more prevalent, demand for faster inference solutions is likely to grow. These models, which can take minutes to generate answers on traditional hardware, operate near-instantaneously on Cerebras systems, according to the company.
For technical decision makers evaluating AI infrastructure options, Cerebras' expansion represents a significant new alternative to GPU-based solutions, particularly for applications where response time is critical to user experience.
Whether the company can truly challenge Nvidia's dominance in the broader AI hardware market remains to be seen, but its focus on high-speed inference and substantial infrastructure investment demonstrates a clear strategy to carve out a valuable segment of the rapidly evolving AI landscape.