Meta Platforms has disclosed details about its latest in-house artificial intelligence accelerator chip, marking a significant step in the company’s efforts to bolster its infrastructure for AI-driven products across its platforms such as Facebook, Instagram, and WhatsApp.
The unveiling of the new chip, internally known as “Artemis,” underscores Meta’s strategy to reduce its reliance on third-party AI chips, particularly those from Nvidia, while simultaneously enhancing efficiency and reducing energy costs.
The chip, officially named the Meta Training and Inference Accelerator, is designed to strike a balance between compute power, memory bandwidth, and capacity to effectively handle ranking and recommendation models, according to a blog post by the company.
Meta’s custom silicon efforts extend beyond chip development, encompassing broader hardware systems as well as software optimization to harness the full potential of its infrastructure.
Flagship chips
In addition to its in-house chip development, Meta continues to invest in external AI chip suppliers. Earlier this year, CEO Mark Zuckerberg announced plans to acquire approximately 350,000 flagship H100 chips from Nvidia, alongside investments in other suppliers, to amass the equivalent of 600,000 H100 chips in total for the year.
The new MTIA chip, manufactured by Taiwan Semiconductor Manufacturing Co. on its advanced “5nm” process, delivers three times the performance of its predecessor.
Already deployed in data centers, the MTIA chip is actively involved in serving AI applications, with ongoing efforts to expand its capabilities to support generative AI workloads.
Meta’s latest move underscores its commitment to advancing AI technologies to power its diverse range of products and services, signaling an era of enhanced efficiency and innovation within the Meta ecosystem.
Ahron Young is an award-winning journalist who has covered major news events around the world. Ahron is the Managing Editor and Founder of TICKER NEWS.
As businesses embrace cutting-edge tech, challenges like data sovereignty and AI are taking centre stage.
Over the past six months, the AI industry has seen significant advancements, with competing models such as Meta’s Llama and Google’s Gemini entering the market.
However, these developments come with a reality check. Building large language models (LLMs) requires substantial computing power and time, making immediate returns on investment unlikely.
One promising innovation is agentic AI, a step beyond generative AI, which enables proactive, automated solutions.
For instance, this technology could stabilise IT systems autonomously, diagnosing and resolving issues without human intervention.
Data sovereignty has also emerged as a key focus, with increasing emphasis on keeping data within national borders to comply with local laws. This has driven the adoption of sovereign clouds and private data centres, ensuring secure and localised data processing for AI development.
Deepak Ajmani, Vice President of ANZ & APAC Emerging Markets at Confluent, joins us to discuss the evolving business landscape.
Key lessons and tips for seamless Copilot adoption
In this episode, Kate Faarland, Senior Vice President of Data and AI Programs at AvePoint, discusses the importance of AvePoint’s data and AI program, the internal challenges of implementing Copilot, and the organisation’s learnings from rolling out Copilot across its workforce.