NUEVA YORK (EFE).— Microsoft unveiled the Maia 200, the second generation of its artificial intelligence (AI) chip, with which it seeks to reduce its dependence on Nvidia and compete with fellow cloud providers Google and Amazon.
The new model arrives two years after its first version, the Maia 100, which never became available to customers.
“This is the most powerful inference system Microsoft has deployed,” Scott Guthrie, Microsoft’s executive vice president of cloud and AI, said in a blog post.
According to the company, the Maia 200 is designed as an inference processor and promises to be 30% more efficient than Microsoft's current hardware. It is being deployed in Azure data centers in the United States to power services such as Microsoft 365 Copilot, Foundry, and the latest GPT models from OpenAI.
The chips are being installed in data centers in the central United States and will later be extended to other locations. They are manufactured using Taiwan Semiconductor Manufacturing Co.'s three-nanometer process.
The announcement comes amid the generative AI boom, as the major cloud providers race to develop their own chips in order not to depend solely on Nvidia, the market leader.
Efforts by Amazon, Google and Microsoft are intensifying competition in this fast-growing market.