NVIDIA Shares Blackwell Design with Open Hardware, Boosting AI Infrastructure Innovation
Solomon Thompson / 1 month ago
NVIDIA has taken a significant step toward accelerating AI infrastructure by contributing the core design elements of its Blackwell computing platform to the Open Compute Project (OCP). This announcement, made during the OCP Global Summit, includes NVIDIA sharing key components of its GB200 NVL72 system with the OCP community. These contributions aim to enhance data center performance through improved compute density and networking bandwidth.
The contributions extend beyond hardware. NVIDIA is also expanding its Spectrum-X Ethernet networking platform to align with OCP standards. Jensen Huang, NVIDIA’s CEO, emphasized, “By advancing open standards, we’re helping organizations worldwide take advantage of the full potential of accelerated computing and create the AI factories of the future.”
Expanding AI Infrastructure Possibilities
The GB200 NVL72 system, based on NVIDIA’s modular MGX architecture, integrates 36 Grace CPUs and 72 Blackwell GPUs in a single rack. NVIDIA says the system delivers up to a 30x speedup for large language model inference compared with the H100 Tensor Core GPU.
Another key advancement is the NVIDIA Spectrum-X platform, which now supports adaptive routing and congestion control, features that improve Ethernet performance, a critical factor for AI infrastructure.
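To illustrate why adaptive routing matters for AI workloads, here is a minimal toy sketch (not NVIDIA’s implementation; the function and flow names are hypothetical). Static hash-based load balancing can collide several large flows onto one link, while an adaptive scheme steers each flow to the currently least-loaded of its equal-cost links:

```python
def adaptive_route(flows, links):
    """Toy adaptive routing: assign each flow to the currently
    least-loaded equal-cost link, rather than hashing flows to
    links statically (which can pile large flows onto one link)."""
    load = {link: 0 for link in links}        # bytes queued per link
    assignment = {}
    for flow, size in flows:
        # pick the link with the smallest accumulated load so far
        best = min(links, key=lambda l: load[l])
        assignment[flow] = best
        load[best] += size
    return assignment, load

# Four flows spread across two equal-cost links
flows = [("f1", 10), ("f2", 10), ("f3", 5), ("f4", 5)]
links = ["L0", "L1"]
assignment, load = adaptive_route(flows, links)
print(assignment)  # large flows land on different links
print(load)        # load ends up balanced: both links carry 15
```

Real switch-level adaptive routing works per packet with hardware telemetry rather than per flow in software, but the balancing principle is the same.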
Meta, one of NVIDIA’s partners, plans to incorporate this architecture into its own AI rack design, offering flexible and energy-efficient building blocks for growing data center needs.