Cirrascale Cloud Services Integrates the NVIDIA HGX H200 into Its AI Innovation Cloud


Cirrascale Cloud Services®, a leading provider of innovative cloud solutions for AI and high-performance computing (HPC) workloads, today announced the general availability of NVIDIA HGX H200 servers in its AI Innovation Cloud. The new offering empowers organizations to scale AI and HPC workloads with unprecedented speed, efficiency, and flexibility.

The NVIDIA HGX H200 server platform is available as an integrated baseboard with eight NVIDIA H200 Tensor Core GPUs, offering full GPU-to-GPU bandwidth through the NVIDIA NVLink and NVSwitch interconnects. Leveraging the H200's multi-precision Tensor Cores, an eight-way HGX H200 provides up to 32 petaFLOPS of FP8 deep learning compute and over 1.1TB of aggregate HBM3e memory for high performance in generative AI and HPC applications. Cirrascale's HGX H200 instances include advanced networking options, at speeds of up to 3,200 gigabits per second (Gb/s), using the NVIDIA Quantum-2 InfiniBand networking platform for advanced AI and HPC workload performance.
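
As a rough, hypothetical illustration (not part of the announcement), a tenant on an eight-GPU HGX H200 instance could confirm the GPU count and the NVLink/NVSwitch peer-to-peer fabric described above with a few lines of PyTorch:

```python
# Hypothetical sanity check on an eight-GPU HGX H200 instance.
# Assumes PyTorch with CUDA support is installed; the actual GPU count and
# peer-access results depend on the instance configuration.
import torch

assert torch.cuda.is_available(), "No CUDA devices visible"

num_gpus = torch.cuda.device_count()
print(f"Visible GPUs: {num_gpus}")  # expect 8 on an eight-way HGX H200 instance

# NVLink/NVSwitch on the HGX baseboard should allow direct peer-to-peer access
# between any pair of GPUs; here we check the pairings from GPU 0's perspective.
for peer in range(1, num_gpus):
    ok = torch.cuda.can_device_access_peer(0, peer)
    print(f"GPU 0 -> GPU {peer} peer access: {ok}")
```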

“Cirrascale remains at the forefront of delivering cutting-edge generative AI and HPC cloud solutions,” said Mike LaPan, vice president of Marketing, Cirrascale Cloud Services. “With the integration of the NVIDIA HGX H200 server platform into our AI Innovation Cloud, we’re empowering our customers with advanced processing capabilities, allowing them to accelerate AI innovation and deploy models with unprecedented speed and efficiency.”


The NVIDIA H200 Tensor Core GPU offers groundbreaking enhancements in accelerated computing. It is the first GPU to feature 141 gigabytes (GB) of HBM3e memory with a memory bandwidth of 4.8 terabytes per second (TB/s), nearly double the capacity of the NVIDIA H100 Tensor Core GPU and 1.4 times its memory bandwidth. These upgrades supercharge generative AI and large language models while delivering significant advancements in scientific computing for HPC workloads, with better energy efficiency and lower total cost of ownership (TCO).
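
For readers who want to sanity-check those figures, the short sketch below reproduces them; the H100 SXM values of 80 GB HBM3 and 3.35 TB/s used for comparison are assumptions for illustration, not numbers quoted in the announcement:

```python
# Back-of-the-envelope check of the memory figures above.
# H100 SXM values (80 GB HBM3, 3.35 TB/s) are assumed, not taken from the announcement.
H200_MEM_GB, H200_BW_TBS = 141, 4.8
H100_MEM_GB, H100_BW_TBS = 80, 3.35
GPUS_PER_BASEBOARD = 8

aggregate_tb = H200_MEM_GB * GPUS_PER_BASEBOARD / 1000  # ~1.13 TB, i.e. "over 1.1TB" aggregate
print(f"Aggregate HBM3e per eight-way HGX H200: ~{aggregate_tb:.2f} TB")
print(f"Memory capacity vs. H100:  {H200_MEM_GB / H100_MEM_GB:.2f}x")   # ~1.76x, "nearly double"
print(f"Memory bandwidth vs. H100: {H200_BW_TBS / H100_BW_TBS:.2f}x")   # ~1.4x
```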

“By deploying the NVIDIA HGX H200 accelerated computing platform, Cirrascale can provide its customers with the technology needed to develop cutting-edge generative AI, natural language processing, and HPC model applications,” said Shar Narasimhan, Director of Data Center GPUs and AI at NVIDIA. “Our collaboration with Cirrascale will help propel AI and HPC exploration forward to drive a new wave of industry breakthroughs.”


TalkDev Bureau
The TalkDev Bureau has five well-trained writers and journalists, well versed in the B2B enterprise technology industry and constantly in touch with industry leaders for the latest trends, opinions, and other inputs, bringing you the best and latest in the domain.
