In a significant shift from its traditional closed ecosystem approach, Nvidia CEO Jensen Huang unveiled NVLink Fusion at Computex 2025 in Taiwan this week, opening the company’s proprietary AI architecture to competitors and partners alike. This strategic move represents what Bank of America analysts called the “most impactful” announcement at this year’s conference.
Breaking Down Barriers in AI Infrastructure
NVLink Fusion fundamentally transforms how AI systems can be built by allowing customers to create semi-custom AI infrastructure that incorporates non-Nvidia components alongside Nvidia’s hardware in rack-scale solutions. As Huang explained during his presentation:
“We’re announcing Nvidia NVLink Fusion. NVLink Fusion, it’s so that you can build semi-custom AI infrastructure. Not just semi-custom chips. Because those are the good old days. You want to build AI infrastructure and everybody’s AI infrastructure could be a little different.”
The technology addresses the growing demand for flexibility in AI computing architectures. Some systems require more CPUs, others more Nvidia GPUs, and some might incorporate specialized ASICs (Application-Specific Integrated Circuits). What they’ve all lacked, according to Huang, is “this incredible ingredient called NVLink” that enables these diverse components to scale effectively.
Tom’s Hardware notes that NVLink has been one of Nvidia’s key competitive advantages in AI workloads, as communication speeds between GPUs and CPUs represent one of the largest barriers to scalability, performance, and power efficiency. The proprietary interconnect delivers up to 14 times more bandwidth than standard PCIe interfaces, while maintaining compatibility with PCIe’s electrical interface.
Technical Specifications and Capabilities
The fifth-generation NVLink platform delivers impressive technical specifications, providing 800Gbps of throughput with ConnectX-8 SuperNICs, Spectrum-X, and Quantum-X800 InfiniBand switches. According to Data Center Dynamics, the platform can provide a total bandwidth of 1.8TB/s per GPU—14 times faster than PCIe Gen5—with support for co-packaged optics coming soon.
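The "14 times faster" figure checks out with some quick arithmetic. As a rough sketch—assuming a PCIe Gen5 x16 link at approximately 128 GB/s of bidirectional bandwidth, which is the commonly cited figure—the comparison works out like this:

```python
# Back-of-the-envelope comparison of NVLink 5 per-GPU bandwidth
# against a PCIe Gen5 x16 link. Figures are approximate:
# NVLink 5 is quoted at 1.8 TB/s per GPU; a PCIe Gen5 x16 link
# is roughly 64 GB/s each direction, ~128 GB/s bidirectional.
NVLINK5_PER_GPU_GBS = 1800  # ~1.8 TB/s per GPU
PCIE_GEN5_X16_GBS = 128     # ~128 GB/s bidirectional (assumption)

ratio = NVLINK5_PER_GPU_GBS / PCIE_GEN5_X16_GBS
print(f"NVLink 5 vs PCIe Gen5 x16: ~{ratio:.0f}x")  # ~14x
```

The exact multiplier depends on whether you count unidirectional or bidirectional figures on each side, but the order of magnitude is the point: the interconnect between GPUs, not the GPUs themselves, is what the 14× claim is about.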
This robust interconnect technology will allow cloud providers to scale up AI factories to “millions of GPUs” using any ASIC in combination with Nvidia’s rack-scale systems and networking platform. The practical implications are substantial: customers can now build powerful AI computing systems with much greater flexibility in component selection.
An Expanding Ecosystem
Nvidia has secured an impressive roster of initial partners for NVLink Fusion. Qualcomm and Fujitsu will be the first to integrate the technology into their CPUs. MediaTek, Marvell, Alchip Technologies, Astera Labs, Synopsys, and Cadence have also signed on as early adopters for “model training and agentic AI inference.”
While showcasing these partnerships, Huang made his preferences clear with a touch of humor: “Nothing gives me more joy than when you buy everything from Nvidia. I just want you guys to know that. But it gives me tremendous joy if you just buy something from Nvidia.”
The expansion doesn’t stop with hardware manufacturers. Nvidia has transferred its intellectual property to companies like Cadence and Synopsys, enabling them to make NVLink technology available to chip designers across the industry. This creates what Huang described as an “incredible” ecosystem that allows partners to “instantly get integrated into the entire larger NVIDIA ecosystem” for scaling up into AI supercomputers.
This strategic opening of Nvidia’s walled garden comes at a pivotal moment in the AI hardware race, potentially reshaping competitive dynamics while ensuring Nvidia maintains its central position in the explosive growth of AI computing infrastructure.