Meta tests first in-house AI training chip, aiming to reduce reliance on Nvidia and cut infrastructure costs.
In Short
Meta is testing its first in-house chip for AI training, aiming to reduce its reliance on external suppliers such as Nvidia. The chip is part of a plan to improve efficiency and cut AI development costs, with Meta targeting deployment for training by 2026.
Meta is testing its first in-house chip designed for training artificial intelligence (AI) systems, a significant step in its effort to reduce dependence on external suppliers such as Nvidia.
The company aims to lower its infrastructure costs amid heavy investment in AI technology: Meta projects total expenses of $114 billion to $119 billion for 2025, driven largely by AI development.
The new chip is a dedicated accelerator built specifically for AI workloads, which may make it more power-efficient than the general-purpose graphics processing units typically used for AI. Meta is working with Taiwan's TSMC to manufacture it.
Critical stage
The testing follows the chip's first “tape-out”, a crucial stage of chip development in which a finished design is sent to a factory for fabrication. The process is costly and time-consuming, with no certainty of success, and a failure can force the designer to diagnose the problem and repeat the step.
The chip is the latest in the Meta Training and Inference Accelerator (MTIA) series, which has had a rocky start over the years. Meta did, however, begin using an MTIA chip last year for AI inference in the recommendation systems that power Facebook and Instagram.
Meta plans to start using its own chips for training by 2026, focusing initially on recommendation systems before expanding to generative AI applications.