The DGX GH200 architecture enables hundreds of powerful chips to act as a single GPU.

NVIDIA’s DGX GH200 supercomputer is fully focused on generative AI

Jensen Huang, CEO of NVIDIA, made a number of announcements during his keynote address at Computex, including details on the company’s upcoming DGX supercomputer. Given the clear direction of the industry, it should come as no surprise that the DGX GH200 is largely intended to help companies develop generative AI models.

The supercomputer uses the new NVLink Switch System to let 256 GH200 Grace Hopper Superchips act as a single GPU (each superchip combines an Arm-based Grace CPU with an H100 Tensor Core GPU). According to NVIDIA, this allows the DGX GH200 to deliver 1 exaflop of AI performance and 144 TB of shared memory. The company says that’s nearly 500 times more memory than you’d find in a single DGX A100 system.
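For a rough sense of where the 144 TB figure comes from, here is a back-of-the-envelope check in Python. It assumes the published GH200 Grace Hopper Superchip spec of 480 GB of LPDDR5X CPU memory plus 96 GB of HBM3 GPU memory per superchip, and a 320 GB DGX A100 configuration for the comparison; those per-system specs are assumptions, not figures stated in this article.

```python
# Back-of-the-envelope check of NVIDIA's memory claims (assumed specs, not from the article):
# each GH200 Grace Hopper Superchip pairs 480 GB LPDDR5X (Grace CPU) with 96 GB HBM3 (H100 GPU).

superchips = 256
cpu_mem_gb = 480   # LPDDR5X attached to the Grace CPU (assumed spec)
gpu_mem_gb = 96    # HBM3 attached to the H100 GPU (assumed spec)

shared_memory_gb = superchips * (cpu_mem_gb + gpu_mem_gb)
print(f"Total shared memory: {shared_memory_gb} GB ≈ {shared_memory_gb / 1024:.0f} TB")
# -> 147456 GB ≈ 144 TB, matching the headline figure

# Comparison with a DGX A100, assuming an 8 x 40 GB A100 configuration (320 GB of GPU memory)
dgx_a100_gb = 8 * 40
print(f"Ratio vs. DGX A100: {shared_memory_gb / dgx_a100_gb:.0f}x")
# -> roughly 460x, consistent with "nearly 500 times" more memory
```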

By comparison, in the most recent Top500 list of supercomputers, Frontier at Oak Ridge National Laboratory in Tennessee is the only known exascale system, achieving nearly 1.2 exaflops on the Linpack benchmark. That is more than twice the peak performance of the second-place Fugaku system in Japan.

On paper, then, NVIDIA claims to have developed a supercomputer that can stand toe-to-toe with the most powerful known system on the planet (Meta, for its part, is building what it says will be the world’s fastest AI supercomputer once fully built out). NVIDIA says the DGX GH200 architecture offers 10 times more bandwidth than the previous generation, “delivering the power of an AI supercomputer with the simplicity of programming a single GPU.”

Some big names are interested in the DGX GH200. Google Cloud, Meta and Microsoft are expected to be among the first companies to get access to the supercomputer and test how it handles AI workloads. NVIDIA says the DGX GH200 will be available by the end of 2023.

The company is also building its own supercomputer, Helios, which combines four DGX GH200 systems. NVIDIA expects Helios to be online by the end of the year.

During his keynote, Huang discussed other generative AI developments, including on the gaming front. The NVIDIA Avatar Cloud Engine (ACE) for Games is a service that developers can use to build custom AI models for speech, conversation and animation. NVIDIA says ACE for Games can give non-player characters conversational skills, letting them answer players’ questions with personalities that evolve.
