AMD Unveils New AI Hardware for Data Centre Deployments

Semiconductor firm AMD introduces high-performance accelerators and networking solutions to meet growing demand for AI infrastructure

Advanced Micro Devices (AMD) has announced a range of new products aimed at enhancing artificial intelligence (AI) capabilities in data centres, including high-performance accelerators, networking solutions and software updates designed to address the increasing demand for AI infrastructure at scale.

“The data centre and AI represent significant growth opportunities for AMD, and we are building strong momentum for our EPYC and AMD Instinct processors across a growing set of customers,” said AMD Chair and CEO Dr Lisa Su. “Looking ahead, we see the data center AI accelerator market growing to US$500bn by 2028. We are committed to delivering open innovation at scale through our expanded silicon, software, network and cluster-level solutions.”

AMD’s new Instinct MI325X accelerators, built on the company’s CDNA 3 architecture, are set to enter production in the fourth quarter of 2024. These accelerators feature 256GB of High Bandwidth Memory 3E (HBM3E), providing 6.0 terabytes per second of bandwidth.

The MI325X accelerators are designed to handle AI tasks such as training and inference of large language models. AMD reports that they offer improved performance compared to previous generations, with up to 1.3 times greater peak theoretical FP16 and FP8 compute performance.

Forrest Norrod, executive vice president and general manager of AMD’s Data Center Solutions Business Group, said: “AMD continues to deliver on our roadmap, offering customers the performance they need and the choice they want, to bring AI infrastructure, at scale, to market faster.”

System availability for the MI325X accelerators is expected from partners including Dell Technologies, Hewlett Packard Enterprise and Lenovo starting in the first quarter of 2025.


AMD has also previewed its next-generation Instinct MI350 series accelerators, based on the CDNA 4 architecture. These are projected to deliver a 35-fold improvement in inference performance compared to the current CDNA 3-based accelerators. The MI350 series is scheduled for release in the second half of 2025.

Networking solutions for AI infrastructure

AMD has also expanded its high-performance networking portfolio to address evolving system networking requirements for AI infrastructure, aiming to maximise CPU and GPU utilisation and deliver performance, scalability and efficiency across the entire system.

The company has introduced two new products: the Pensando Salina Data Processing Unit (DPU) and the Pensando Pollara 400 Network Interface Card (NIC).

The Pensando Salina DPU, designed for the front-end of AI networks, supports 400 gigabit per second throughput. AMD claims it offers up to twice the performance, bandwidth, and scale compared to its predecessor.

For back-end networks, the Pensando Pollara 400 is described as the industry’s first Ultra Ethernet Consortium (UEC) ready AI NIC. It aims to optimise accelerator-to-accelerator communication in AI clusters.

Both networking products are currently being sampled by customers and are expected to be available in the first half of 2025.

Software enhancements

AMD continues to invest in its software capabilities, particularly in its ROCm open software stack. The company is working to ensure support for its compute engines in popular AI frameworks and libraries, including PyTorch and Hugging Face.

The latest version of ROCm, 6.2, includes support for new AI features such as the FP8 datatype and Flash Attention 3. AMD reports that these additions have resulted in up to 2.4 times performance improvement on inference and 1.8 times on training for various large language models, compared to ROCm 6.0.

Market position

With these new offerings, AMD aims to strengthen its position in the AI hardware market, which is currently dominated by Nvidia. The company’s comprehensive approach includes not only accelerators but also processors and networking components, providing a full-stack solution for AI infrastructure.

AMD’s push into the AI space comes as demand for specialised AI hardware continues to grow, driven by the widespread adoption of generative AI technologies across industries. The global AI chip market is projected to expand significantly in the coming years, presenting opportunities for semiconductor firms to capture market share.

Forrest Norrod emphasises the company’s holistic approach to AI infrastructure: “With the new AMD Instinct accelerators, EPYC processors and AMD Pensando networking engines, the continued growth of our open software ecosystem, and the ability to tie this all together into optimised AI infrastructure, AMD underscores the critical expertise to build and deploy world class AI solutions.”


Data Centre Magazine is a BizClik brand

