Enhancing Data Center Networks with InfiniBand Solutions

With the rapid growth of data centers driven by large-scale AI models, cloud computing, and big data analytics, there is an increasing demand for high-speed data transfer and low-latency communication. In this complex network ecosystem, InfiniBand (IB) technology has become a market leader, playing a vital role in addressing the challenges posed by training and deploying large models. Building high-speed networks within data centers requires essential components such as high-speed network cards, optical modules, switches, and advanced network interconnect technologies.

NVIDIA Quantum™-2 InfiniBand Switch

When selecting switches, NVIDIA's QM9700 and QM9790 series stand out as the most advanced devices available. Built on the NVIDIA Quantum-2 architecture, they offer 64 NDR 400Gb/s InfiniBand ports in a standard 1U chassis. This translates to a single switch providing a total bidirectional bandwidth of 51.2 terabits per second (Tb/s), along with a handling capacity exceeding 66.5 billion packets per second (BPPS).

Beyond their NDR high-speed data transfer capabilities, the NVIDIA Quantum-2 InfiniBand switches combine high throughput, on-chip compute processing, advanced intelligent acceleration features, flexibility, and sturdy construction. These attributes make them a natural choice for high-performance computing (HPC), artificial intelligence, and large-scale cloud infrastructure. In addition, deploying NDR switches helps reduce overall cost and complexity, advancing the evolution of data center network technologies.

Also check: Revolutionizing Data Center Networks: 800G Optical Modules and NDR Switches | FS Community
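As a quick sanity check on the headline figures above, here is a minimal back-of-the-envelope calculation in Python (an illustrative sketch, not vendor tooling) showing how 64 ports at 400Gb/s combine into the quoted 51.2 Tb/s of bidirectional bandwidth.

    # Back-of-the-envelope check of the QM9700/QM9790 headline figures.
    PORTS = 64                 # NDR ports per 1U switch
    PORT_SPEED_GBPS = 400      # NDR line rate per port, Gb/s

    unidirectional_tbps = PORTS * PORT_SPEED_GBPS / 1000   # 25.6 Tb/s one way
    bidirectional_tbps = 2 * unidirectional_tbps           # 51.2 Tb/s, as quoted

    print(f"Aggregate one-way bandwidth: {unidirectional_tbps} Tb/s")
    print(f"Aggregate bidirectional bandwidth: {bidirectional_tbps} Tb/s")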

ConnectX®-7 InfiniBand Card

The NVIDIA ConnectX®-7 InfiniBand network card (HCA) ASIC delivers 400Gb/s of data throughput and supports a 16-lane PCIe 5.0 or PCIe 4.0 host interface. Using advanced 100Gb/s-per-lane SerDes technology, the 400Gb/s InfiniBand connection is delivered through OSFP connectors on both the switch and HCA ports. Each switch-side OSFP connector carries two 400Gb/s or 200Gb/s InfiniBand ports, while the HCA exposes a single 400Gb/s InfiniBand port. The product range includes active and passive copper cables, transceivers, and MPO fiber cables. Notably, although both sides use OSFP packaging, their physical dimensions differ: the switch-side OSFP module is equipped with heat fins for cooling.
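To put these numbers in context, the rough sketch below works out the port rate and the host-interface bandwidth. The 4-lane port grouping and the PCIe 128b/130b encoding factor are standard figures assumed for illustration rather than values quoted above.

    # Rough bandwidth math for a ConnectX-7-class HCA (standard SerDes and
    # PCIe rates assumed; encoding factors are not quoted in this article).
    SERDES_LANE_GBPS = 100          # 100Gb/s-per-lane SerDes (PAM4)
    PORT_LANES = 4                  # lanes aggregated into one NDR port
    port_gbps = SERDES_LANE_GBPS * PORT_LANES       # 400 Gb/s per port

    def pcie_gbps(gt_per_s, lanes):
        # Raw PCIe bandwidth: transfer rate x lanes x 128b/130b encoding efficiency.
        return gt_per_s * lanes * 128 / 130

    gen5_x16 = pcie_gbps(32, 16)    # ~504 Gb/s: headroom above a 400Gb/s port
    gen4_x16 = pcie_gbps(16, 16)    # ~252 Gb/s: below the 400Gb/s line rate

    print(f"NDR port rate: {port_gbps} Gb/s")
    print(f"PCIe 5.0 x16:  {gen5_x16:.0f} Gb/s")
    print(f"PCIe 4.0 x16:  {gen4_x16:.0f} Gb/s")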

OSFP 800G Optical Transceiver

The OSFP-800G SR8 module is designed for 800Gb/s 2xNDR InfiniBand links of up to 30m over OM3 or 50m over OM4 multimode fiber (MMF), operating at an 850nm wavelength through dual MTP/MPO-12 connectors. Its dual-port design is a key innovation: two internal transceiver engines fully unleash the potential of the switch, allowing its 32 physical interfaces to provide up to 64 400G NDR interfaces. This high-density, high-bandwidth design enables data centers to meet the growing network demands of applications such as high-performance computing, artificial intelligence, and cloud infrastructure.

FS's OSFP-800G SR8 module offers superior performance and dependability, providing strong optical interconnect options for data centers. It empowers data centers to harness the full performance of the QM9700/QM9790 series switches, supporting data transmission with both high bandwidth and low latency.
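The following is a minimal sketch of the port arithmetic described above, assuming the 32-cage, dual-engine layout of the QM9700/QM9790: eight 100Gb/s channels per module yield 800Gb/s, and two engines per cage yield 64 logical 400G NDR ports.

    # Port arithmetic for the dual-port OSFP layout described above.
    CAGES_PER_SWITCH = 32        # physical OSFP interfaces on the switch
    ENGINES_PER_MODULE = 2       # dual-port module: two 400G transceiver engines
    LANES_PER_MODULE = 8         # SR8: eight optical channels at 850nm
    LANE_GBPS = 100              # 100Gb/s per optical channel

    module_gbps = LANES_PER_MODULE * LANE_GBPS              # 800 Gb/s per module
    ndr_ports = CAGES_PER_SWITCH * ENGINES_PER_MODULE       # 64 x 400G NDR ports

    print(f"Per-module bandwidth: {module_gbps} Gb/s (2 x 400G NDR)")
    print(f"Logical 400G NDR ports per switch: {ndr_ports}")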

NDR Optical Connection Solution

To address the NDR optical connection challenge, the NDR switch ports use OSFP interfaces with eight channels each, every channel running 100Gb/s SerDes. This allows three mainstream connection options: 800G to 800G, 800G to 2x400G, and 800G to 4x200G (the lane arithmetic behind these options is sketched below). Additionally, each channel can be downgraded from 100Gb/s to 50Gb/s, enabling interoperability with previous-generation HDR devices. The 400G NDR series cables and transceivers offer diverse product choices for configuring network switch and adapter systems, focusing on data center reaches of up to 500 meters to accelerate AI computing systems. The available connector types, including passive copper cables (DAC), active optical cables (AOC), and optical modules with jumpers, cater to different transmission distances and bandwidth requirements, ensuring low latency and an extremely low bit error rate for high-bandwidth AI and accelerated computing applications. For deployment details, please see the article Infiniband NDR OSFP Solution from the FS community.
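The short sketch below illustrates the lane arithmetic implied by these options. The branch counts come from the breakout list above, treating each OSFP interface as eight 100Gb/s channels; it is an illustration of the math, not a configuration tool.

    # Lane arithmetic behind the three breakout options and the HDR downgrade.
    OSFP_LANES = 8               # channels per NDR switch OSFP interface
    NDR_LANE_GBPS = 100          # native per-channel rate
    HDR_LANE_GBPS = 50           # per-channel rate after downgrade for HDR interop

    breakouts = {
        "800G -> 1 x 800G": 1,   # straight through (e.g. switch-to-switch)
        "800G -> 2 x 400G": 2,   # two NDR branches
        "800G -> 4 x 200G": 4,   # four NDR200 branches
    }

    for name, branches in breakouts.items():
        lanes = OSFP_LANES // branches
        print(f"{name}: {branches} branch(es) of {lanes} x {NDR_LANE_GBPS}G "
              f"= {lanes * NDR_LANE_GBPS}Gb/s each")

    # A four-lane branch at the downgraded rate matches an HDR (200Gb/s) port.
    print(f"HDR interop branch: 4 x {HDR_LANE_GBPS}G = {4 * HDR_LANE_GBPS}Gb/s")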
