OpenAI has announced a major partnership with Broadcom to design and build its own custom AI accelerators for data centers. The move reduces the company's heavy dependence on Nvidia and lets it scale up faster and more cheaply for ChatGPT and future projects. The companies are targeting 10 gigawatts of these accelerators, with rollout starting in the second half of 2026 and wrapping up by the end of 2029. CEO Sam Altman called the deal a critical step in building the infrastructure needed for AI to deliver real benefits to people and businesses.
Cutting Costs and Boosting Speed
Under the deal, OpenAI designs the accelerators and systems, building in what it has learned from training frontier models, while Broadcom co-develops and deploys them. Custom silicon tuned to OpenAI's workloads could cut power consumption and costs compared with off-the-shelf Nvidia hardware, which matters at OpenAI's scale: the company is already eyeing 5 gigawatts of capacity from other suppliers. It's part of a broader wave in which big AI players like Google and Amazon are trading one-size-fits-all chips for tailored ones.
Who Benefits and When?
The buildout ramps up OpenAI's infrastructure for everyone using its tools, from free-tier users to enterprise customers. The first chips are slated to hit data centers in the second half of 2026, with the full 10 gigawatts online by the end of 2029. There's no direct consumer access to the hardware, but the added capacity should make OpenAI's services faster and more reliable worldwide.
Shaking Up the Chip World
With Nvidia estimated to hold around 80% of the AI chip market, the deal gives OpenAI more control over its supply chain and could spark more custom-silicon plays from others. It's a smart hedge against shortages, pushing AI forward without the bottlenecks, one chip at a time.