“NVIDIA has emerged as a founding member of the Optical Compute Interconnect (OCI) Multi-Source Agreement alongside AMD, Broadcom, Meta, Microsoft, and OpenAI. This collaborative effort aims to establish open standards for optical scale-up interconnects in AI clusters, replacing distance- and power-limited copper with high-speed, energy-efficient fiber optics capable of scaling to 3.2 Tb/s and beyond. The initiative addresses critical bottlenecks in massive AI infrastructure deployments, enabling protocol-agnostic connectivity, greater multi-vendor interoperability, and unprecedented cluster sizes while keeping power, latency, and cost low.”
NVIDIA’s Strategic Move into Optical Interconnects
NVIDIA’s decision to co-found the Optical Compute Interconnect (OCI) Multi-Source Agreement marks a pivotal step in addressing the escalating demands of AI-driven data centers. As AI models grow exponentially in complexity and scale, the need for ultra-high-bandwidth, low-latency connections between thousands, or even millions, of accelerators has outpaced traditional copper-based electrical signaling. Copper interconnects face severe distance and power constraints, with signals typically degrading beyond roughly one meter at extreme speeds, which restricts cluster sizes and efficiency in hyperscale environments.
The OCI MSA focuses on defining a protocol-agnostic physical layer (PHY) for short-reach scale-up interconnections within AI racks and systems. This open specification shifts from module-centric designs to silicon-centric models, integrating optics directly with compute and networking silicon. By combining non-return-to-zero (NRZ) modulation with wavelength division multiplexing (WDM), the OCI approach promises optimized power, latency, and cost profiles. Initial generations target 200 Gbps per direction in GEN1 (4λ × 50 Gbps NRZ) and scale to 400 Gbps or more per direction in GEN2, potentially reaching up to 800 Gbps per fiber and evolving toward 3.2 Tb/s in future iterations.
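The GEN1 figure above follows from simple WDM arithmetic, sketched below. The GEN1 lane split (4λ × 50 Gbps NRZ) is as described; the GEN2 split shown is an assumed configuration for illustration only, since the exact wavelength count and per-lambda rate for later generations are not specified here.

```python
# Illustrative back-of-envelope arithmetic for WDM link bandwidth.
# GEN1 parameters (4 lambdas x 50 Gbps NRZ) are from the article;
# the GEN2 split below is an assumption, not a published spec.

def per_direction_gbps(num_lambdas: int, gbps_per_lambda: int) -> int:
    """Aggregate per-direction bandwidth of a WDM link in Gbps."""
    return num_lambdas * gbps_per_lambda

gen1 = per_direction_gbps(4, 50)    # 4 wavelengths x 50 Gbps NRZ
print(f"GEN1: {gen1} Gbps per direction")   # 200 Gbps

# Hypothetical GEN2 reaching 400 Gbps per direction, e.g. by
# doubling the per-lambda rate to 100 Gbps.
gen2 = per_direction_gbps(4, 100)
print(f"GEN2 (assumed 4 x 100G): {gen2} Gbps per direction")   # 400 Gbps
```

The same multiplication extends to the 800 Gbps-per-fiber and multi-terabit targets: higher per-lambda rates, more wavelengths, or both.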
This standardization enables hyperscalers to deploy optical cables instead of copper, supporting larger domains with predictable performance. It fosters a multi-vendor supply chain, reducing dependency on proprietary solutions and accelerating innovation across the ecosystem. NVIDIA’s involvement ensures compatibility with its NVLink protocol, while accommodating others like UALink from AMD and Broadcom partners. The result is a unified fiber infrastructure that allows diverse processors and interconnect protocols to coexist seamlessly.
Overcoming Copper’s Physical Limits in AI Clusters
Current AI data centers rely heavily on electrical interconnects for intra-rack and short-range connections, but the push toward exascale AI factories exposes copper’s inherent drawbacks. Signal degradation, high power consumption, and limited reach hinder the scaling required for next-generation training and inference workloads. Optical interconnects eliminate these barriers by transmitting data via light over fiber, offering vastly superior bandwidth density and energy efficiency.
Industry forecasts indicate steady adoption of co-packaged optics (CPO) technologies, with penetration in AI data center optical modules expected to climb significantly over the coming years. NVIDIA’s architecture, including NVLink 6 with 400G SerDes per lane and up to 3.6 TB/s of bandwidth per GPU, positions optical solutions as essential for future generations like Rubin and beyond. Scale-out inter-rack transmission could adopt silicon photonics and CPO first, with scale-up spanning multiple racks following as packaging matures.
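To put that per-GPU figure in context, a rough estimate shows how many optical lanes such bandwidth would occupy at the per-fiber rates discussed earlier. This is a hypothetical back-of-envelope calculation, not a vendor specification: it takes the 3.6 TB/s aggregate as given and ignores encoding overhead, directionality, and real link budgets.

```python
import math

# Hypothetical helper: how many fibers a given per-GPU bandwidth
# would occupy at a given per-fiber rate. Illustrative only --
# encoding overhead and link-level details are ignored.

def fibers_needed(gpu_tbytes_per_s: float, gbps_per_fiber: float) -> int:
    tbits = gpu_tbytes_per_s * 8                       # TB/s -> Tb/s
    return math.ceil(tbits * 1000 / gbps_per_fiber)    # Tb/s -> Gb/s, round up

print(fibers_needed(3.6, 200))   # at 200 Gbps/fiber (GEN1-class): 144
print(fibers_needed(3.6, 800))   # at 800 Gbps/fiber: 36
```

The steep drop in fiber count as per-fiber rates rise illustrates why the roadmap toward 800 Gbps and multi-terabit fibers matters for packaging density.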
The OCI MSA builds on these trends by promoting tighter integration of optics with silicon, unlocking gains in system scalability. This silicon-centric paradigm reduces the reliance on pluggable transceivers, which introduce higher power draw and failure points, and instead embeds optical engines directly onto ASICs for enhanced resiliency and efficiency.
Broader Implications for AI Infrastructure Scaling
The formation of the OCI MSA reflects a collective recognition among leading players that proprietary approaches alone cannot sustain the AI boom. By establishing common optical standards, the group enables hyperscalers to build interoperable clusters that mix hardware from multiple vendors without compatibility headaches. This plug-and-play ecosystem is crucial as AI infrastructure expands into gigawatt-scale facilities requiring millions of interconnected GPUs.
NVIDIA’s leadership in this initiative aligns with its broader push into advanced optics. Recent partnerships with optical specialists emphasize investments in manufacturing capacity, research, and silicon photonics to support global AI buildouts. These efforts target ultra-high-bandwidth, energy-efficient connectivity essential for AI factories, where power efficiency directly impacts operational costs and sustainability.
In practice, OCI-compliant designs could transform data center topologies. Switches and accelerators would connect via standardized optical links, supporting protocols like NVLink for NVIDIA systems and equivalents for others. This flexibility benefits end-users by allowing customized configurations while maintaining high performance across massive domains.
Key Technical Advancements Enabled by OCI
Bandwidth Scaling: Starting at 200 Gbps/direction and targeting multi-terabit per fiber capabilities.
Power and Latency Optimization: Matching or exceeding copper efficiency while extending reach.
Interoperability: Protocol-agnostic PHY supports NVLink, UALink, and future standards.
Density Improvements: Silicon-centric integration boosts bandwidth per unit area and reduces thermal overhead.
Multi-Vendor Ecosystem: Open specification promotes competition and supply chain resilience.
These elements position optical compute interconnects as foundational to the next era of super-intelligence infrastructure. As AI workloads demand ever-greater scale, standardized optical links will be instrumental in overcoming current bottlenecks and enabling efficient, massive parallelism.
Disclaimer: This is a news report based on industry developments and does not constitute investment advice.