Published on March 19, 2024, 2:44 am

Hewlett Packard Enterprise Co. Introduces Cutting-Edge Generative AI Supercomputing Platforms In Collaboration With Nvidia

Hewlett Packard Enterprise Co. has revealed its long-awaited generative artificial intelligence supercomputing platforms, designed to help businesses develop, fine-tune, and run large language models within their own data centers. The unveiling coincided with HPE and Supermicro Inc. introducing substantial updates to their generative AI workload offerings.

Among the notable additions are powerful new servers incorporating Nvidia Corp.'s latest graphics processing units, the Blackwell GPUs. Announced at GTC 2024, these servers highlight the collaboration between HPE and Nvidia, which draws on high-performance computing expertise to build a comprehensive generative AI supercomputer that provides developers with the software, services, and compute power needed to create advanced models.

HPE's supercomputing platform for generative AI, first introduced last November, is now available for purchase. Designed as a full-stack solution for building and training large language models, the system is built around Nvidia's GH200 Grace Hopper Superchips and includes everything needed to get started with generative AI: liquid cooling, accelerated compute, networking, storage, and AI services.

The offering targets large enterprises, research institutions, and government entities, and can be purchased directly or through the HPE GreenLake pay-per-use model. Preconfigured for fine-tuning and inference workloads, it provides compute power, storage capacity, software infrastructure, networking support, and consulting services to help companies get started with generative AI.

Under the hood is a high-performance AI compute cluster built on HPE ProLiant DL380a Gen11 servers and Nvidia H100 graphics processing units. It also integrates Nvidia's Spectrum-X Ethernet networking technology and BlueField-3 data processing units, which are tuned for AI workloads.

Moreover, HPE has integrated its own machine learning tools with Nvidia's AI Enterprise 5.0 platform, including the newly introduced NIM microservices, to streamline AI development. The platform also supports a range of LLMs, both proprietary and open source.
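For context on how developers interact with such a stack: NIM microservices expose an OpenAI-compatible REST API, so a deployed model can be queried with standard client libraries. The sketch below is a minimal, illustrative example, assuming a NIM container already serving a Llama-family model on a local endpoint; the URL and model name are placeholders, not details from HPE's announcement.

# Minimal sketch: querying a NIM microservice via its OpenAI-compatible API.
# Assumptions: a NIM container is running locally on port 8000 and serving
# the model named below; both values are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used",                   # local deployments typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",      # placeholder model name
    messages=[
        {"role": "user", "content": "Summarize our Q1 support tickets in three bullet points."}
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)

Because the interface mirrors the OpenAI API, the same client code can be pointed at an on-premises NIM deployment or a hosted endpoint by changing only the base URL.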

Designed to bridge the gap in AI expertise within enterprises, HPE Services offers support ranging from platform customization to on-premises project implementation. President and Chief Executive Antonio Neri emphasized the need for a hybrid cloud approach that covers the entire AI lifecycle, from model training in on-premises or colocation environments to inference at the edge.

Furthermore, progress was disclosed on upcoming servers based on Nvidia's new Blackwell GPU architecture, which are set to debut soon. Highlights include Supermicro's announcement of an array of new servers built around Nvidia's HGX B100 and HGX B200 systems with Blackwell Tensor Core GPUs, optimized to boost LLM training performance over prior generations.

Supermicro plans to continue pushing server technology forward with upcoming releases, including rack-scale systems that connect multiple GPUs over the latest interconnect technologies for higher performance. With these developments on the horizon from HPE and Supermicro, in close collaboration with Nvidia, generative artificial intelligence is poised to reshape the enterprise landscape with solutions tailored to modern business challenges.
