NVIDIA has taken a significant step toward accelerating AI infrastructure by contributing the core design elements of its Blackwell computing platform to the Open Compute Project (OCP). This announcement, made during the OCP Global Summit, includes NVIDIA sharing key components of its GB200 NVL72 system with the OCP community. These contributions aim to enhance data center performance through improved compute density and networking bandwidth.
The contributions extend beyond hardware. NVIDIA is also expanding its Spectrum-X Ethernet networking platform to align with OCP standards. Jensen Huang, NVIDIA’s CEO, emphasized, “By advancing open standards, we’re helping organizations worldwide take advantage of the full potential of accelerated computing and create the AI factories of the future.”
The GB200 NVL72 system, based on NVIDIA’s modular MGX architecture, integrates 36 Grace CPUs and 72 Blackwell GPUs. NVIDIA says the system delivers up to 30x faster inference for large language models compared with the H100 Tensor Core GPU.
Another key advancement is the NVIDIA Spectrum-X platform, which now supports adaptive routing and congestion control. These capabilities improve Ethernet performance, a critical factor for AI infrastructure.
Meta, one of NVIDIA’s partners, plans to incorporate this architecture into its AI rack system, offering flexible and energy-efficient solutions for growing data center needs.