OpenAI Is Turning to AMD Chips and Could Make Its Own AI Hardware by 2026

OpenAI is reportedly teaming up with Broadcom to create custom silicon that can handle the massive artificial intelligence (AI) workloads required for inference tasks. According to sources cited by Reuters, the company has also secured manufacturing capacity with Taiwan Semiconductor Manufacturing Company (TSMC). OpenAI is assembling a team of around 20 specialists for this project, including key engineers who previously worked on Google’s Tensor Processing Units, the custom chips Google designed for AI applications.

Even with these partnerships in place, the custom hardware is still in its early stages, with production not expected to begin until 2026. This timeline suggests that while OpenAI is making strides in designing its own chips, it will take some time before the company can deploy these technologies to support its growing AI operations.

In the meantime, OpenAI is leveraging chips from AMD, incorporating them into its Microsoft Azure infrastructure. AMD’s MI300 chips, launched last year, have made significant waves in the tech world. They have played a key role in AMD’s data center business, which has doubled in size within a single year, largely as customers sought alternatives to Nvidia, the dominant player in the AI chip market.

Earlier this year, there were reports that OpenAI was exploring various options for developing its own AI chips. In July, The Information reported that OpenAI was in talks with Broadcom and other semiconductor designers to pursue this goal. Additionally, Bloomberg had revealed that OpenAI was considering building its own network of foundries to produce these chips. However, according to Reuters, those ambitious plans have been paused, largely due to the high costs and the long timelines required to achieve such a significant feat.

OpenAI’s decision to develop custom hardware places it in line with other tech giants that are working to reduce costs and maintain access to high-performance AI hardware. Companies like Google, Microsoft, and Amazon have been producing their own custom chips for several generations, which has given them a considerable head start. Google, for example, has been producing its Tensor Processing Units (TPUs) for AI workloads, Microsoft has introduced its own Maia AI accelerators for Azure, and Amazon has been heavily investing in custom chip designs, such as its Trainium and Inferentia lines, for its AWS platform.

Given the scale and complexity of designing and manufacturing custom AI chips, OpenAI faces significant challenges in becoming a major player in this space. The company may need a substantial increase in funding and resources to compete with these tech giants, who already have established supply chains and extensive experience in chip development.

While the custom silicon project is still a few years away from coming to fruition, OpenAI’s collaboration with Broadcom and its efforts to build a chip development team signal its intent to take control over its AI infrastructure. Custom-designed chips could potentially provide OpenAI with more efficient and cost-effective hardware, tailored specifically to the demands of its AI models like GPT-4 and future iterations. This could help the company reduce its reliance on external hardware providers and better manage the costs associated with large-scale AI training and inference tasks.

However, entering the custom silicon market is not without risks. Designing and manufacturing chips is an incredibly expensive and time-consuming process, especially for a company like OpenAI, which is relatively new to the hardware space. Moreover, the competition is fierce, with established players like Nvidia dominating the market for AI-specific hardware. Nvidia’s GPUs are currently the industry standard for AI workloads, and it has a substantial lead in terms of both performance and market share.

OpenAI’s reliance on AMD’s MI300 chips in the short term indicates that the company is hedging its bets while it works on its own hardware solutions. AMD has been making significant strides in the AI space, with its MI300 chips offering a competitive alternative to Nvidia’s products. By incorporating AMD chips into its Azure setup, OpenAI can continue to scale its AI operations without being entirely dependent on Nvidia, while also buying time to develop its own custom silicon.

In summary, OpenAI’s collaboration with Broadcom and its efforts to design custom chips represent a strategic move to take greater control over its AI infrastructure. However, the road ahead is long and filled with challenges, particularly given the dominance of Nvidia and the significant head start of other tech giants in the custom chip space. For now, OpenAI’s reliance on AMD chips allows it to continue growing its AI capabilities while it works toward its longer-term goal of producing its own hardware.