Nvidia Upgrades Its Flagship AI Chip, Adds New, Faster Features

More high-bandwidth memory for Nvidia's AI chips.

As per a Reuters report, Nvidia will release a new chip called the H200, which will reportedly support more high-bandwidth memory, allowing it to process data more quickly than its H100 artificial intelligence chip. The top-of-the-line AI chip will reportedly start to roll out next year with Amazon, Alphabet's Google, and Oracle.

In a press release following the company's SC23 Special Address, Ian Buck, vice president of Nvidia's high-performance computing and hyperscale data center business, described the chip as "the world's leading AI computing platform."

Nvidia's Upcoming Super Chip Can Work on 'Most Complex' Generative AI Tasks

Reuters reports that the H200 will carry 141 gigabytes of high-bandwidth memory, significantly more than the 80 gigabytes of its H100 predecessor. The added memory capacity and a quicker link to the chip's processing components will let AI services built on the chip respond more rapidly.

CNBC reports that, according to Nvidia, the H200 will generate output almost twice as quickly as the H100, based on a test using Meta's Llama 2 large language model.

Specifically, as per the press release, the H200's Tensor Core GPU (graphics processing unit, the part of the chip that handles many pieces of data at the same time) provides up to an 18x performance increase over prior-generation accelerators when running models like GPT-3.

Nvidia's AI Chip Domination

Buck lauded the speed and future implications of the faster AI chip, stating that "accelerated computing is sustainable computing." He added that "by combining the power of generative AI with accelerated computing, we can reduce our environmental impact and drive innovation across industries."

Nvidia's current H100 processor dominates the market and was previously utilized by OpenAI to train GPT-4, its most sophisticated large language model. Governmental organizations, large corporations, and start-ups are all competing for a limited supply of the chips.

Nvidia's chips also lead in other fields, such as medical technology: recent Argonne National Laboratory research that used Nvidia GPUs to analyze 1.5 million COVID genomic sequences allowed researchers to quickly detect and identify new virus variants.

Newest AI Chip's Anticipated Release

According to the recently published Reuters report, Nvidia said Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure will be among the first cloud service providers to offer access to H200 processors, along with specialized AI cloud providers CoreWeave, Lambda, and Vultr.

Anticipated for release in the second quarter of 2024, as per CNBC, the H200 will rival AMD's MI300X GPU, which similarly packs more memory than its predecessors to accommodate larger models on the hardware for inference.
