How DeepL Will Expand AI In Europe with Nvidia DGX SuperPOD

NVIDIA DGX SuperPOD will be used to power research computation (Image: Nvidia)
DeepL expands AI translation capabilities with Nvidia’s latest supercomputing technology to bolster business-level language processing across Europe

AI language leader DeepL is deploying Nvidia’s powerful AI system to expand its AI infrastructure capabilities. 

With the first Nvidia DGX GB200 SuperPOD deployment in Europe, DeepL has made its largest purchase to date, aiming to fuel the company's industry-leading Language AI platform. The Nvidia DGX SuperPOD, which is expected to be operational at DeepL by mid-2025, will be used to power research computation.

NVIDIA DGX GB200 SuperPOD (Image: Nvidia)

The company expects Nvidia's system to provide the additional computing power required to train new AI models and to develop features and products that enhance its Language AI platform, which is already breaking down language barriers for businesses and professionals worldwide.

Advancing the next stage of enterprise AI

Nvidia's DGX GB200 SuperPOD was first unveiled by the world-leading chipmaker earlier in 2024 at its GTC event. The supercomputer can process trillion-parameter models with constant uptime, enabling generative AI (Gen AI) training at greater scale.

It is essentially a data-centre-scale AI supercomputer that can integrate with high-performance storage to meet the business needs of Gen AI workloads. This is a necessary offering, according to Nvidia, given the sheer global demand for AI across businesses worldwide.


Across Europe in particular, AI demand is set to keep rising, putting greater pressure on data centres and tripling their energy use by 2030, according to McKinsey. Whilst AI is cited as the cause of this surge, it could also help businesses run their data centres more efficiently through smarter systems.

With this in mind, the DGX SuperPOD supercomputer features a liquid-cooled rack-scale architecture, which Nvidia states is highly efficient. These systems also include Nvidia's GB200 Grace Blackwell superchips, which are designed to deliver much faster performance for large language model (LLM) workloads.

The superchip clusters are:
  • Purpose-built to deliver extreme performance
  • Designed for consistent uptime across superscale Gen AI training and inference workloads

Using these capabilities, DeepL aims to run its high-performance AI models, which are necessary for its advanced Gen AI applications. 

“DeepL has always been a research-led company, which has enabled us to develop Language AI for translation that continues to outperform other solutions on the market,” says Jarek Kutylowski, CEO and Founder of DeepL. 

Jarek Kutylowski, CEO and Founder of DeepL

“This latest investment in Nvidia accelerated computing will give our research and engineering teams the power necessary to continue innovating and bringing to market the Language AI tools and features that our customers know and love us for.”

The need for greater processing power

This is DeepL's third deployment of an Nvidia DGX SuperPOD. It offers more processing power than DeepL Mercury, a Top500 supercomputer, as well as DeepL's previous flagship DGX SuperPOD with DGX H100 systems, which was deployed a year ago in Sweden. The latest deployment will be housed in the same Swedish data centre.

With a rapidly growing customer network of more than 100,000 businesses and governments around the world, including 50% of the Fortune 500 and industry leaders like Zendesk, Nikkei, Coursera and Deutsche Bahn, DeepL is already revolutionising global communication with its groundbreaking Language AI platform.

The company's industry-leading translation and writing tools empower businesses to break down language barriers, expand into new markets and drive unprecedented cross-border collaboration.

“Customers using Language AI applications expect nearly instant responses, making efficient and powerful AI infrastructure critical for both building and deploying AI in production,” explains Charlie Boyle, Vice President of the Nvidia DGX platform at Nvidia.

Charlie Boyle, Vice President of the NVIDIA DGX platform at NVIDIA

“DeepL’s deployment of the latest Nvidia DGX SuperPOD will accelerate its Language AI research and development, empowering users to communicate more effectively across languages and cultures.” 

The Nvidia announcement is the latest in a series of significant developments for DeepL. In 2024 alone, the company has opened a tech hub in New York, updated its Glossary feature and unveiled its next-generation LLM.

Aiming to set a new standard for personalisation, accuracy and performance, DeepL was recently named on Forbes' 2024 Cloud 100 list and raised US$300m of new investment in May 2024, at a US$2bn valuation.




Data Centre Magazine is a BizClik brand

