Meta is building its first-generation custom silicon chip for running artificial intelligence (AI) models, saying its AI compute needs will grow dramatically over the next decade as it breaks new ground in AI research.

Meta works on artificial intelligence-powered hardware, artificial intelligence chips and next-generation graphics cards


Called MTIA (Meta Training and Inference Accelerator), the in-house custom accelerator chip provides greater compute power and efficiency than CPUs and is tailored for Meta's internal workloads.

“By deploying both MTIA chips and GPUs, we can deliver better performance, lower latency and greater efficiency for every workload,” said Santosh Janardhan, Meta’s vice president and head of infrastructure.
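The idea behind the quote above can be sketched in a few lines: each workload is routed to whichever accelerator suits it best. This is illustrative only; the function name, workload labels, and routing policy are invented for the example and are not Meta's actual API or scheduling logic.

```python
# Hypothetical sketch: route a workload to MTIA or GPU by workload type.
# All names here are invented for illustration, not Meta internals.
def pick_accelerator(workload_type: str) -> str:
    """Return the accelerator class best suited to a given workload."""
    # MTIA is described as tailored to Meta's internal inference workloads,
    # such as recommendation and ranking models.
    mtia_suited = {"recommendation_inference", "ranking_inference"}
    if workload_type in mtia_suited:
        return "MTIA"  # custom silicon for internal inference workloads
    return "GPU"       # general-purpose accelerator for everything else

print(pick_accelerator("ranking_inference"))  # -> MTIA
print(pick_accelerator("llm_training"))       # -> GPU
```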

The company is also planning a new AI-optimized data center design and the second phase of its 16,000 GPU supercomputer for AI research.

“These efforts – and additional projects still underway – will allow us to develop larger, more sophisticated AI models and then deploy them effectively at scale,” Janardhan added.

The next-generation data center is an AI-optimized design that supports liquid-cooled AI hardware and a powerful AI network that connects thousands of AI chips together for data center-scale AI training clusters.

“It’s also faster and more cost-effective to build, and complements other new hardware, such as our first in-house ASIC solution, the MSVP (Meta Scalable Video Processor), designed to power the ever-growing video workloads at Meta,” Janardhan said.

Meta’s Research SuperCluster (RSC) AI supercomputer, which the company believes is one of the fastest AI supercomputers in the world, was built to train the next generation of large AI models powering new augmented reality tools, content understanding systems, real-time translation technology and more.

It has 16,000 GPUs, all accessible via a 3-tier Clos network fabric that provides full bandwidth to each of the 2,000 training systems.
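A quick back-of-the-envelope check of the figures above: 16,000 GPUs across 2,000 training systems works out to 8 GPUs per system, consistent with the 8-GPU NVIDIA DGX A100 nodes Meta has publicly said RSC is built from.

```python
# Sanity-check the article's RSC numbers: 16,000 GPUs / 2,000 systems.
total_gpus = 16_000
training_systems = 2_000
gpus_per_system = total_gpus // training_systems
print(gpus_per_system)  # -> 8 (matches an 8-GPU DGX A100 node)
```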

“By designing much of our infrastructure, we can optimize the end-to-end experience from the physical layer to the virtual layer to the software layer to the actual user experience,” Meta said.

