Shillong, November 16: In a move that intensifies the AI race, Microsoft unveiled two custom chips and integrated systems designed in-house to support the training of large language models.
As per IANS, the Microsoft Azure Maia AI Accelerator, optimized for AI tasks and generative AI, and the Microsoft Azure Cobalt CPU, an Arm-based processor tailored for general-purpose compute workloads, are set to be deployed in Microsoft’s data centers early next year.
The announcement came at the ‘Microsoft Ignite’ event, where Scott Guthrie, Executive Vice President of Microsoft’s Cloud + AI Group, said, “Microsoft is building the infrastructure to support AI innovation, and we are reimagining every aspect of our data centers to meet the needs of our customers.”
The homegrown chips align with Microsoft’s vision of tailoring every element of its stack for cloud and AI workloads. Rani Borkar, Corporate Vice President for Azure Hardware Systems and Infrastructure, described the end goal as an Azure hardware system that offers maximum flexibility and can be optimized for power, performance, sustainability, or cost.
At the same event, Microsoft announced the general availability of Azure Boost, a system that speeds up storage and networking by offloading those processes onto purpose-built hardware and software. The company’s emphasis on co-designing and optimizing hardware and software underscores its holistic approach to infrastructure.
Microsoft is also expanding its industry partnerships to give customers more infrastructure options. By combining first-party silicon with offerings from industry partners, the company aims to offer customers a broader range of price and performance choices.
OpenAI collaborated with Microsoft on the effort, notably by providing feedback on Azure Maia. Sam Altman, CEO of OpenAI, acknowledged the partnership, stating, “Azure’s end-to-end AI architecture, now optimized down to the silicon with Maia, paves the way for training more capable models and making those models cheaper for our customers.”