Meta announced that it is expanding its custom silicon program with four new generations of Meta Training and Inference Accelerator (MTIA) chips to be developed and deployed within the next two years. The new chips will support ranking, recommendations, and generative AI (GenAI) workloads, on a faster release cadence than is typical for the industry. MTIA 300 is already in production for ranking and recommendations training, while MTIA 400, 450, and 500 will focus primarily on GenAI inference, entering production through 2027.
The company’s AI infrastructure strategy centers on a portfolio approach that combines its own MTIA chips with silicon sourced from other industry leaders. Meta has deployed hundreds of thousands of MTIA chips for inference workloads across organic content and ads, achieving higher compute efficiency and cost-effectiveness than general-purpose chips. MTIA’s modular design lets new chips integrate into existing rack systems, shortening time-to-production.
Meta’s roadmap emphasizes rapid, iterative development, an inference-first design philosophy, and alignment with industry standards such as PyTorch, vLLM, Triton, and the Open Compute Project. This approach aims to sustain innovation speed and scalability as the company advances toward its goal of enabling personal superintelligence.