NVIDIA Clean Sweeps MLPerf AI Benchmarks With Hopper H100 GPU, Up To 4.5x Performance Uplift Over Ampere A100

NVIDIA’s Hopper H100 GPU has made its debut on the MLPerf AI Benchmark list and shattered all previous records achieved by Ampere A100. While Hopper Tensor Core GPUs pave the way for the next big AI revolution, the Ampere A100 GPUs continue to showcase leadership performance in the mainstream AI application suite while Jetson AGX Orin leads in edge computing.

NVIDIA’s AI Revolution Continues With Hopper H100 Tensor Core GPU Shattering All MLPerf Benchmarks, Delivering Up To 4.5x Performance Uplift Versus Last-Gen

Press Release: In their debut on the MLPerf industry-standard AI benchmarks, NVIDIA H100 Tensor Core GPUs set world records in inference on all workloads, delivering up to 4.5x more performance than previous-generation GPUs. The results demonstrate that Hopper is the premium choice for users who demand the utmost performance on advanced AI models.

Chart: Offline scenario for data center and edge (single GPU)

Additionally, NVIDIA A100 Tensor Core GPUs and the NVIDIA Jetson AGX Orin module for AI-powered robotics continued to deliver overall leadership inference performance across all MLPerf tests: image and speech recognition, natural language processing, and recommender systems.

The H100, aka Hopper, raised the bar in per-accelerator performance across all six neural networks in this round. It demonstrated leadership in both throughput and speed in separate server and offline scenarios. The NVIDIA Hopper architecture delivered up to 4.5x more performance than NVIDIA Ampere architecture GPUs, which continue to provide overall leadership in MLPerf results.

Thanks in part to its Transformer Engine, Hopper excelled on the popular BERT model for natural language processing. It’s among the largest and most performance-hungry of the MLPerf AI models. These inference benchmarks mark the first public demonstration of H100 GPUs, which will be available later this year. The H100 GPUs will participate in future MLPerf rounds for training.

A100 GPUs Show Leadership

NVIDIA A100 GPUs, available today from major cloud service providers and systems manufacturers, continued to show overall leadership in mainstream AI inference performance in the latest tests. A100 GPUs won more tests than any other submission across the data center and edge computing categories and scenarios. In June, the A100 also delivered overall leadership in MLPerf training benchmarks, demonstrating its abilities across the full AI workflow.


Since their July 2020 debut on MLPerf, A100 GPUs have advanced their performance by 6x, thanks to continuous improvements in NVIDIA AI software. NVIDIA AI is the only platform to run all MLPerf inference workloads and scenarios in data center and edge computing.

Users Need Versatile Performance

The ability of NVIDIA GPUs to deliver leadership performance on all major AI models makes users the real winners. Their real-world applications typically employ many neural networks of different kinds.

For example, an AI application may need to understand a user’s spoken request, classify an image, make a recommendation, and then deliver a response as a spoken message in a human-sounding voice. Each step requires a different type of AI model.

The MLPerf benchmarks cover these and other popular AI workloads and scenarios – computer vision, natural language processing, recommendation systems, speech recognition, and more. The tests ensure users will get dependable performance and the flexibility to deploy it where needed.

Users rely on MLPerf results to make informed buying decisions because the tests are transparent and objective. The benchmarks enjoy backing from a broad group that includes Amazon, Arm, Baidu, Google, Harvard, Intel, Meta, Microsoft, Stanford, and the University of Toronto.

Orin Leads at the Edge

In edge computing, NVIDIA Orin ran every MLPerf benchmark, winning more tests than any other low-power system-on-a-chip. And it showed up to a 50% gain in energy efficiency compared to its debut on MLPerf in April. In the previous round, Orin ran up to 5x faster than the prior-generation Jetson AGX Xavier module, while delivering an average of 2x better energy efficiency.

Orin integrates an NVIDIA Ampere architecture GPU and a cluster of powerful Arm CPU cores into a single chip. It is available today in the NVIDIA Jetson AGX Orin developer kit and in production modules for robotics and autonomous systems, and it supports the full NVIDIA AI software stack, including platforms for autonomous vehicles (NVIDIA Hyperion), medical devices (Clara Holoscan), and robotics (Isaac).
