Inspur's AI server just shattered a world speed record
“NVIDIA A100 Tensor Core GPUs offer customers unmatched acceleration at every scale for AI, data analytics and HPC,” read a statement back in May. “Inspur AI servers, powered by NVIDIA A100 GPUs, will help global users eliminate their computing bottlenecks and dramatically lower their cost, energy consumption, and data centre space requirements.”
The server, which is designed to deliver ultra-high-speed bandwidth while running AI applications - the company lists scenarios such as intelligent customer service, financial analysis, smart cities, and intelligent language processing - recorded its results using MLPerf, an industry-standard AI benchmarking organisation.
According to Inspur, MLPerf is the most influential industry benchmarking organisation in the field of AI worldwide. Established in May 2018, it is supported by a number of industry giants and academic institutions, including Amazon, Baidu, Facebook, Google, Harvard University, Intel, NVIDIA, Microsoft, Alibaba, Inspur, and Stanford University.
The MLPerf 0.7 training benchmark comprised eight tasks covering typical deep learning scenarios such as image classification, object detection, reinforcement learning, recommendation, and translation. Nine organisations participated and submitted results: Google, NVIDIA, Intel, Alibaba, Tencent, Inspur, Dell, Fujitsu, and SIAT.
ResNet-50 is the world’s most widely accepted standard for evaluating the performance of AI computing systems and AI chips. In the ResNet-50 training task of this round, Inspur’s NF5488A5 server completed model training in just 33.37 minutes.
That makes it the fastest single AI server on the market today.
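For readers unfamiliar with the model behind the record: ResNet-50 takes its name from its "residual" blocks, whose defining feature is a skip connection that adds a block's input back to its output, which is what allows networks this deep to train at all. Below is a toy, dependency-free sketch of that one idea; the function names are invented for illustration, and a real MLPerf submission trains the full 50-layer network on ImageNet, not lists of floats:

```python
# Toy illustration of a residual (skip) connection, the core idea
# behind ResNet-50. Real implementations operate on GPU tensors;
# plain Python lists keep this sketch self-contained.

def relu(x):
    # Elementwise rectified linear unit.
    return [max(0.0, v) for v in x]

def linear(x, weight, bias):
    # One fully connected layer: y_i = sum_j w_ij * x_j + b_i
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weight, bias)]

def residual_block(x, weight, bias):
    # Output is F(x) + x: the block learns a *residual* on top of
    # the identity mapping rather than a full transformation.
    fx = relu(linear(x, weight, bias))
    return [a + b for a, b in zip(fx, x)]

# With all-zero weights and biases, F(x) is zero everywhere, so the
# block reduces exactly to the identity: F(x) + x = x.
x = [1.0, -2.0, 3.0]
zero_w = [[0.0] * 3 for _ in range(3)]
zero_b = [0.0] * 3
print(residual_block(x, zero_w, zero_b))  # -> [1.0, -2.0, 3.0]
```

The identity-at-initialisation property shown at the end is the intuition usually given for why residual networks of 50-plus layers remain trainable where plain stacks of the same depth stall.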