Nvidia still crushes AI, but AMD got a win

by Tekmono Editorial Team
03/04/2025
in News

Nvidia’s GPUs dominated MLPerf’s AI benchmark, showcasing top performance in generative AI tasks, but AMD and MangoBoost managed to snag a win in one category. The tests, organized by MLCommons, assessed AI inference speeds, including new benchmarks for large language models (LLMs) and graph neural networks.

SuperMicro, Hewlett Packard Enterprise, and Lenovo, among others, used systems with up to eight Nvidia chips to secure top positions. The MLPerf benchmark now incorporates tests for common generative AI applications, such as Meta’s Llama 3.1 405B and an interactive version of Llama 2 70B that simulates chatbot response times by measuring how quickly the first token is produced.

Additionally, the benchmark introduced tests for graph neural networks, relevant to programs using generative AI, and LiDAR sensing data processing for automobile mapping. Nvidia’s GPUs generally led in the closed division, which enforces strict software setup rules.

AMD’s MI300X GPU, however, outperformed Nvidia in two Llama 2 70B tests, achieving 103,182 tokens per second. This AMD system was built by MangoBoost, a new MLPerf participant specializing in GPU data transfer and gen AI serving software like LLMboost.

Nvidia contested the AMD comparison, arguing for score normalization based on the number of chips and computer nodes used. Dave Salvator, Nvidia’s director of accelerated computing products, stated that MangoBoost used 32 MI300X GPUs compared to Nvidia’s 8 B200s, yet achieved only a 3.83% higher result. He added, “NVIDIA’s 8x B200 submission actually outperformed MangoBoost’s x32 AMD MI300X GPUs in the Llama 2 70B server submission.”
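As a rough illustration of the normalization Nvidia is arguing for, the per-GPU throughput implied by these figures can be back-calculated. Note that the 8x B200 total is not stated in the article; it is inferred here from the quoted 3.83% gap, so the numbers below are an assumption, not an MLPerf result.

```python
# Hypothetical per-GPU normalization of the Llama 2 70B server results.
# Nvidia's total is back-calculated from the quoted 3.83% gap (assumption).
mangoboost_total = 103_182                # tokens/s on 32x AMD MI300X (article)
nvidia_total = mangoboost_total / 1.0383  # implied tokens/s on 8x Nvidia B200

per_gpu_amd = mangoboost_total / 32
per_gpu_nvidia = nvidia_total / 8

print(f"MI300X, per GPU: {per_gpu_amd:,.0f} tokens/s")
print(f"B200, per GPU:   {per_gpu_nvidia:,.0f} tokens/s")
```

On this per-chip view the 8-GPU Nvidia system comes out well ahead, which is the substance of Salvator’s objection; MLPerf’s closed division, however, ranks whole-system submissions rather than per-chip figures.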

Google’s Trillium chip, the sixth TPU iteration, was also tested but lagged behind Nvidia’s Blackwell in Stable Diffusion image-generation query speed. The recent MLPerf benchmarks saw fewer Nvidia competitors than previous rounds; Intel’s Habana and Qualcomm had no submissions.

Intel did see success in the datacenter closed division, where its Xeon microprocessor powered seven of the top 11 systems as the host processor, compared to three for AMD’s EPYC. Nvidia, however, built the top-performing system for processing Meta’s Llama 3.1 405B using its Grace-Blackwell 200 chip, combining the Blackwell GPU with Nvidia’s Grace microprocessor.

Tags: AI, AMD, benchmark, chip, Intel, MLPerf, Nvidia, test


Tekmono is a Linkmedya brand. © 2015.
