Published on November 30, 2023, 9:16 am

AWS and Nvidia Partner to Provide Enhanced Infrastructure for Generative AI

Amazon Web Services (AWS) and Nvidia have joined forces to offer enhanced infrastructure for generative AI. Nvidia’s CUDA platform is currently unrivaled in AI support, making its GPUs highly sought after. To meet that demand, AWS will host Nvidia-based infrastructure for generative AI workloads.

Under this strategic partnership, several key projects will be undertaken. One such project is Project Ceiba, which aims to create the world’s fastest GPU-powered AI supercomputer, hosted by AWS exclusively for Nvidia’s own use. The supercomputer will integrate 16,384 Nvidia GH200 Superchips, delivering an astonishing 65 ‘AI ExaFLOPS’ of processing power, and will primarily support Nvidia’s generative AI research and development projects.
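As a rough sanity check, the 65-ExaFLOPS headline is consistent with multiplying the chip count by GH200 per-chip throughput. The per-superchip figure below is an assumption drawn from Nvidia’s published GH200 specifications, not from the announcement itself:

```python
# Back-of-the-envelope check of Project Ceiba's headline figure.
# FP8_PFLOPS_PER_CHIP is an assumed sparse-FP8 throughput per GH200
# Superchip, taken from public spec sheets (not from the announcement).
SUPERCHIPS = 16_384
FP8_PFLOPS_PER_CHIP = 3.96

total_exaflops = SUPERCHIPS * FP8_PFLOPS_PER_CHIP / 1_000  # PFLOPS -> ExaFLOPS
print(f"{total_exaflops:.0f} AI ExaFLOPS")  # -> 65 AI ExaFLOPS
```

The figure matches only under low-precision (FP8) arithmetic with sparsity, which is why vendors qualify such numbers as ‘AI’ FLOPS rather than traditional FP64 supercomputing benchmarks.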

Another significant component of the partnership is the Nvidia DGX Cloud hosted on AWS, an AI-training-as-a-service platform. It is built on the GH200 NVL32 system with 19.5 TB of unified memory, giving developers the largest shared memory available in a single cloud instance. This significantly accelerates the training of advanced generative AI models and large language models.
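To put the 19.5 TB figure in perspective, dividing it across the 32 superchips in an NVL32 instance gives the approximate memory each GH200 contributes to the shared pool. This is a simple illustrative calculation, not an official per-chip specification:

```python
# Rough decomposition of the 19.5 TB unified-memory figure.
# Both constants come from the announcement; the per-chip share is
# derived here for illustration only.
SUPERCHIPS_PER_INSTANCE = 32
UNIFIED_MEMORY_TB = 19.5

per_chip_gb = UNIFIED_MEMORY_TB * 1_000 / SUPERCHIPS_PER_INSTANCE
print(f"~{per_chip_gb:.0f} GB of addressable memory per GH200")  # -> ~609 GB
```

That per-chip share is plausible given that each GH200 pairs a Grace CPU’s LPDDR5X with a Hopper GPU’s HBM in a single coherent address space.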

Furthermore, AWS will be the first to offer a cloud-based AI supercomputer built on Nvidia’s GH200 Grace Hopper Superchips. Each instance connects 32 Grace Hopper Superchips via NVLink, and deployments can scale to thousands of GH200 Superchips using Amazon’s Elastic Fabric Adapter (EFA) networking and the advanced virtualization of the Nitro System.

The collaboration also includes the introduction of new Amazon EC2 instances powered by various Nvidia GPUs. These instances feature H200 Tensor Core GPUs with up to 141 GB of HBM3e memory, specifically designed for large-scale generative AI and high-performance computing workloads. Additionally, G6 and G6e instances, equipped with Nvidia L4 and L40S GPUs respectively, are tailored for applications ranging from AI fine-tuning to 3D workflow development.
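The lineup above can be sketched as a simple workload-to-family lookup. The GPU pairings (H200 for large-scale generative AI and HPC; G6 with L4 and G6e with L40S for fine-tuning through 3D workflows) follow the announcement, but the workload labels and selection logic here are purely illustrative, not an official AWS taxonomy:

```python
# Hypothetical helper mapping a coarse workload label to one of the
# newly announced EC2 instance families. The pairings reflect the
# announcement; the labels and function itself are a sketch.
def suggest_instance_family(workload: str) -> str:
    """Return the announced instance family suited to a workload."""
    table = {
        "llm-training": "H200-based instances",  # large-scale generative AI / HPC
        "fine-tuning": "G6 (Nvidia L4)",
        "3d-workflow": "G6e (Nvidia L40S)",
    }
    return table.get(workload, "unknown workload")

print(suggest_instance_family("fine-tuning"))  # -> G6 (Nvidia L4)
```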

To speed up generative AI development on AWS, Nvidia’s advanced software will be integrated. This encompasses the NeMo LLM framework and NeMo Retriever for creating chatbots and summarization tools, as well as BioNeMo for accelerating drug discovery processes.

Adam Selipsky, CEO of AWS, expressed his excitement about the partnership, emphasizing AWS’s commitment to innovation and to making AWS the best place to run GPUs. Jensen Huang, founder and CEO of Nvidia, highlighted that generative AI is transforming cloud workloads and that Nvidia and AWS are collaborating across the entire computing stack to deliver cost-effective, state-of-the-art generative AI to customers.

With this collaboration between AWS and Nvidia, users can expect enhanced infrastructure and accelerated development in the field of generative AI. The wide range of GPU solutions offered by AWS combined with Nvidia’s expertise will undoubtedly drive further advancements in this important sector.
