2025 OCP Summit: Intel and Samsung Foundry join the ecosystem as NVIDIA unveils the Vera Rubin architecture for high-efficiency gigawatt-scale AI factories
At the Open Compute Project (OCP) Global Summit, NVIDIA presented its roadmap for gigawatt-scale artificial intelligence (AI) factories, including the NVIDIA Vera Rubin NVL144 MGX next-generation open-architecture rack server, a next-generation 800-volt direct current power design, and the expanded NVIDIA NVLink Fusion ecosystem.
More than 50 MGX partners are preparing for the specification of the NVIDIA Vera Rubin NVL144 MGX next-generation open-architecture rack server and providing ecosystem support for NVIDIA Kyber. NVIDIA Kyber can connect 576 Rubin Ultra GPUs to handle growing inference demand.
In addition, more than 20 industry partners are demonstrating next-generation chips, components, and power systems, along with support for next-generation gigawatt-class 800-volt direct current (VDC) data centers built around the NVIDIA Kyber rack architecture. Hon Hai Technology Group (Foxconn) has unveiled details of Kaohsiung-1, its 800 VDC, 40 MW data center in Taiwan. Vendors such as CoreWeave, Lambda, Nebius, Oracle Cloud Infrastructure, and Together AI are also designing 800-volt data centers.
Meanwhile, Vertiv released a space-saving, cost- and energy-efficient 800 VDC MGX reference architecture, a complete power-delivery and cooling infrastructure design. HPE announced that its products will support NVIDIA Kyber and NVIDIA Spectrum-XGS Ethernet, part of the Spectrum-X Ethernet platform.
NVIDIA said that moving from traditional 415 or 480 volt alternating current (VAC) three-phase systems to 800 VDC infrastructure improves data-center scalability, energy efficiency, and performance while reducing materials use. The electric-vehicle and solar industries have already adopted 800 VDC infrastructure for similar benefits. The Open Compute Project, founded by Meta, is an industry alliance of hundreds of computing and networking vendors focused on redesigning hardware to efficiently support growing demand for computing infrastructure.
Vera Rubin NVL144: a scale-out design for AI factories

The Vera Rubin NVL144 MGX compute tray adopts an energy-efficient, 100% liquid-cooled modular design. Its central printed-circuit-board midplane replaces traditional cable connections for faster assembly and maintenance, and it includes modular expansion slots supporting NVIDIA ConnectX-9 800 Gb/s networking and NVIDIA Rubin CPX for massive-context inference. Building on this foundation, NVIDIA Vera Rubin NVL144 delivers a major leap in accelerated-computing architecture and AI performance, purpose-built for the needs of advanced reasoning engines and AI agents.
NVIDIA Vera Rubin NVL144 is also built around the MGX rack architecture and will be supported by more than 50 MGX system and component partners. NVIDIA plans to contribute its upgraded rack and compute-tray innovations to the OCP consortium as open standards. The OCP compute-tray and rack standards let partners mix and match components in a modular fashion and scale out faster as the architecture grows. The Vera Rubin NVL144 rack design uses an energy-efficient 45°C liquid-cooling system, is equipped with a new liquid-cooled busbar for higher performance, and adds 20 times the energy storage to keep power delivery stable. The MGX upgrades to the compute tray and rack architecture improve AI-factory performance and simplify assembly, enabling rapid build-out of gigawatt-scale AI infrastructure.
NVIDIA noted that it has been a major contributor to OCP standards across multiple hardware generations, including key electromechanical design elements of the NVIDIA GB200 NVL72 system. The same MGX rack specification supports not only GB300 NVL72 but will also support Vera Rubin NVL144, Vera Rubin NVL144 CPX, and Vera Rubin CPX in the future, delivering higher performance and faster deployment.
NVIDIA Kyber: a new generation of rack servers

The OCP ecosystem is also preparing for NVIDIA Kyber, whose innovations lie in 800 VDC power delivery, liquid cooling, and mechanical design. These innovations will drive the transition to the NVIDIA Kyber rack-server generation. NVIDIA Kyber is the successor platform to NVIDIA Oberon and is expected to arrive in 2027 as a high-density platform housing 576 NVIDIA Rubin Ultra GPUs. The most effective way to address high-power distribution challenges is to raise the voltage: transitioning from a traditional 415 or 480 VAC three-phase system to an 800 VDC architecture provides multiple benefits, and it allows rack-server partners to upgrade the 54 VDC components inside the rack to 800 VDC for better results.
NVIDIA Kyber is designed to increase GPU density per rack, scale up the network, and maximize the performance of large-scale AI infrastructure. By rotating the compute blades and arranging them vertically, like books on a bookshelf, each Kyber chassis can house up to 18 compute blades, while dedicated NVIDIA NVLink switch blades are integrated at the rear of the chassis through a cable-free midplane for seamless network scale-up.
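A quick back-of-the-envelope check of the blade layout described above. The per-blade figure is derived here for illustration only, under the assumption that the 576 Rubin Ultra GPUs are spread evenly across a single chassis of 18 compute blades; the actual Kyber partitioning has not been detailed in this article.

```python
# Illustrative blade math for the Kyber layout (assumption: one chassis,
# GPUs divided evenly across its 18 compute blades).
TOTAL_GPUS = 576          # Rubin Ultra GPUs per Kyber rack (from the article)
BLADES_PER_CHASSIS = 18   # compute blades per chassis (from the article)

gpus_per_blade = TOTAL_GPUS // BLADES_PER_CHASSIS
print(gpus_per_blade)  # 32
```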
With 800 VDC, the same copper can transmit 150% more power, eliminating the need for 200 kilograms of copper busbar to feed a single rack. Kyber will become a foundational element of hyperscale AI data centers, bringing superior performance, efficiency, and reliability to the most advanced generative AI workloads in the coming years. NVIDIA Kyber racks can help customers cut copper use by tons, saving millions of dollars.
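The physics behind the copper savings can be sketched with basic circuit arithmetic. This is not NVIDIA's published engineering analysis; it is a minimal illustration, with an assumed busbar current limit and loop resistance, of why raising voltage lets the same conductor carry more power: deliverable DC power is P = V · I, while resistive heating is I²R, so at a fixed current limit the loss fraction falls as voltage rises.

```python
# Illustrative power-delivery arithmetic for rack power distribution.
# CURRENT_LIMIT_A and BUSBAR_RESISTANCE_OHM are assumed values, not
# figures from NVIDIA's Kyber or MGX specifications.

def deliverable_power_w(voltage_v: float, current_a: float) -> float:
    """DC power a conductor can carry at a given current: P = V * I."""
    return voltage_v * current_a

def loss_fraction(voltage_v: float, current_a: float, resistance_ohm: float) -> float:
    """Fraction of delivered power dissipated as I^2 * R heat in the conductor."""
    return (current_a ** 2 * resistance_ohm) / deliverable_power_w(voltage_v, current_a)

CURRENT_LIMIT_A = 1000.0        # assumed ampacity: same copper in both cases
BUSBAR_RESISTANCE_OHM = 0.001   # assumed loop resistance of the busbar run

p_54v = deliverable_power_w(54.0, CURRENT_LIMIT_A)    # today's in-rack 54 VDC
p_800v = deliverable_power_w(800.0, CURRENT_LIMIT_A)  # proposed 800 VDC

print(round(p_800v / p_54v, 1))  # ~14.8x more power per amp of copper
print(loss_fraction(54.0, CURRENT_LIMIT_A, BUSBAR_RESISTANCE_OHM))
print(loss_fraction(800.0, CURRENT_LIMIT_A, BUSBAR_RESISTANCE_OHM))
```

Equivalently, holding power constant instead of current, a higher distribution voltage draws proportionally less current, so a thinner, lighter conductor suffices, which is where the multi-ton copper reduction comes from.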
NVLink Fusion ecosystem expands: Intel and Samsung Foundry join

Beyond the hardware itself, NVIDIA NVLink Fusion is gaining momentum, helping enterprises seamlessly integrate semi-custom silicon into highly optimized, widely deployed data-center architectures, reducing complexity and accelerating time to market. Intel and Samsung Foundry have joined the NVLink Fusion ecosystem, which spans custom-silicon designers and CPU and IP partners, helping AI factories scale rapidly to handle intensive workloads such as model training and agentic AI inference.
Under the recently announced NVIDIA-Intel collaboration, Intel will use NVLink Fusion to build x86 CPUs that integrate into NVIDIA infrastructure platforms. In addition, Samsung Foundry is partnering with NVIDIA to meet growing demand for custom CPUs and custom XPUs, offering end-to-end experience from design to manufacturing for custom silicon.
An open ecosystem is essential to scaling the next generation of AI factories

NVIDIA stated that more than 20 partners are working together to deliver rack servers built on open standards, enabling the gigawatt-scale AI factories of the future. These include:
Chip suppliers: Analog Devices, Inc. (ADI), AOS, EPC, Infineon, Innoscience, MPS, Navitas, onsemi, Power Integrations, Renesas, Richtek Technology, ROHM, STMicroelectronics, Texas Instruments.
Power system component suppliers: BizLink, Delta, Flex, GE Vernova, Lead Wealth, Lite-On Technology, Megmeet.
Data center power system suppliers: ABB, Eaton, GE Vernova, Heron.