
Meta announced that it is expanding its custom silicon program with four new generations of Meta Training and Inference Accelerator (MTIA) chips to be developed and deployed within the next two years. The new chips will support ranking, recommendations, and generative AI (GenAI) workloads, a faster release cadence than is typical for the industry. MTIA 300 is already in production for ranking and recommendations training, while MTIA 400, 450, and 500 will primarily target GenAI inference in production through 2027.

The company’s AI infrastructure strategy centers on a portfolio approach that combines its own MTIA chips with silicon sourced from other industry leaders. Meta has deployed hundreds of thousands of MTIA chips for inference workloads across organic content and ads, achieving higher compute efficiency and cost-effectiveness compared to general-purpose chips. The modular design of MTIA allows new chips to integrate seamlessly into existing rack systems, reducing time-to-production.

Meta’s roadmap emphasizes rapid, iterative development, an inference-first design philosophy, and alignment with industry standards such as PyTorch, vLLM, Triton, and the Open Compute Project. This approach aims to sustain innovation speed and scalability as the company advances toward its goal of enabling personal superintelligence.


News Source: fb.com, 12 Mar 26

Expanding Meta’s Custom Silicon to Power Our AI Workloads

In 2023, we developed the Meta Training and Inference Accelerator (MTIA), a family of custom-built silicon chips to power our AI workloads efficiently. Now, we're developing and deploying four new generations of chips within the next two years — a much faster release cycle than is typical for the industry.

