Nvidia Strengthens AI Lead With Processor Upgrade
Nvidia Corp., the world’s most valuable chipmaker, is adding new capabilities to its H100 artificial intelligence processor. The update reinforces the product’s role in driving Nvidia’s dominance of the AI computing market.
The new model, called the H200, will use HBM3e high-bandwidth memory, making it better suited to the large data sets needed to develop and run artificial intelligence systems, Nvidia said on Monday. Amazon.com Inc.’s AWS, Alphabet Inc.’s Google Cloud and Oracle Corp.’s Cloud Infrastructure have all committed to using the new chip starting next year.
The current version of the Nvidia processor, a category of chip known as an AI accelerator, is already famously in demand. It’s a prized commodity among tech heavyweights like Larry Ellison and Elon Musk, who tout their ability to get their hands on the chips. But the product faces growing competition: Advanced Micro Devices Inc. introduced a rival MI300 chip last quarter, and Intel Corp. claims its Gaudi 2 model is faster than the H100.
With the new product, Nvidia is trying to keep pace with the size of the data sets used to create AI models and services, it said. The added memory capacity makes the H200 much faster at bombarding software with data, the process that trains AI to perform tasks such as image and speech recognition.
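As a rough illustration of why memory capacity matters for the growing models the article describes, a model’s weight storage scales directly with its parameter count. The figures below are illustrative assumptions, not numbers from Nvidia’s announcement:

```python
# Back-of-the-envelope sketch (assumed figures, not from Nvidia):
# the memory needed just to hold a large model's weights.
params = 70e9            # a hypothetical 70-billion-parameter model
bytes_per_param = 2      # 16-bit (fp16) weights: 2 bytes each
gigabytes = params * bytes_per_param / 1e9
print(f"{gigabytes:.0f} GB")  # → 140 GB
```

Training and inference need additional memory beyond the weights themselves, which is why larger on-chip memory lets a single accelerator handle bigger models without splitting them across devices.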
“When you look at what’s happening in the market, model sizes are growing rapidly,” said Dion Harris, who oversees Nvidia’s data center products. “It’s another example of us continuing to quickly adopt the latest and greatest technology.”
Major computer manufacturers and cloud service providers are expected to start using the H200 in the second quarter of 2024.
Nvidia started making graphics cards for gamers, but its powerful processors have now won a following among data center operators. This division has changed from a side business to the company’s biggest moneymaker in less than five years.
Nvidia’s graphics chips helped pioneer an approach called parallel computing, in which a large number of relatively simple calculations are processed simultaneously. That capability has allowed it to win large orders from data center companies at the expense of traditional processors supplied by Intel.
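The parallel pattern described above can be sketched in a few lines. This is an illustrative example using NumPy’s vectorized operations, not Nvidia’s actual code; on a GPU, each of these independent calculations could run on its own thread:

```python
import numpy as np

# One million independent, simple calculations expressed as a
# single vectorized operation -- the workload pattern that
# parallel hardware like GPUs accelerates.
a = np.arange(1_000_000, dtype=np.float32)
b = np.arange(1_000_000, dtype=np.float32)

# Element-wise addition: each result depends only on one pair of
# inputs, so all one million additions can happen simultaneously.
c = a + b
print(c[:3])  # → [0. 2. 4.]
```

The key property is independence: because no calculation waits on another’s result, throughput scales with the number of processing units, which is why chips with thousands of simple cores outpace a few powerful general-purpose cores on this kind of work.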
The growth helped make Nvidia the poster child for AI computing earlier this year, sending its market capitalization skyrocketing. The Santa Clara, Calif.-based company became the first chipmaker valued at $1 trillion, overtaking the likes of Intel.
Still, it has faced challenges this year, including U.S. restrictions on sales of artificial intelligence accelerators to China. The Biden administration has sought to limit the flow of advanced technology to that country, hurting Nvidia’s sales in the world’s largest chip market.
Export regulations have kept the H100 and other advanced processors out of China, but Nvidia has been developing new AI chips tailored to that market, according to a local media report last week.
Nvidia will give investors a clearer picture of the situation next week. It is scheduled to report earnings on November 21.