Meta CEO Mark Zuckerberg announced on Thursday the merger of the company’s AI research team with its business-focused generative AI team, underscoring Meta’s commitment to integrating AI technologies into its products. Zuckerberg highlighted the expansion of Meta’s infrastructure to support this initiative, revealing plans to deploy approximately 350,000 Nvidia H100 GPUs by year-end.
With contributions from other suppliers, Meta anticipates a total of around 600,000 GPUs, positioning its system as one of the largest in the tech industry upon completion.
By comparison, Amazon disclosed plans last autumn to build a system with 100,000 of its Trainium2 chips, while Oracle introduced a system with 32,000 Nvidia H100 GPUs. Although Meta declined to specify GPU suppliers beyond Nvidia, it has publicly expressed intentions to incorporate chips from AMD, and there are reports of an internally designed GPU-like chip in development.
Meta’s recent emphasis on generative AI follows years of pioneering research by its FAIR team, with a renewed focus on integrating AI into core social media products and AR/VR hardware devices. The formation of the “GenAI” team last year marked a strategic move to accelerate this integration, responding to the success of OpenAI’s ChatGPT chatbot in late 2022.
As part of these efforts, Meta has launched a commercial version of its Llama large language model, ad tools capable of generating image backgrounds from text prompts, and a “Meta AI” chatbot accessible through Ray-Ban smart glasses. In Thursday’s posts, Zuckerberg disclosed that a third version of the Llama model is in training, linking these AI advancements to Meta’s broader vision of an AR/VR-driven metaverse. He asserted that new devices, such as glasses, would be essential for interacting with AI in this evolving metaverse landscape.