ASUS announced its expanded AI infrastructure portfolio, powered by NVIDIA Blackwell and Blackwell Ultra architectures, at Supercomputing 2025. The lineup spans rack-scale supercomputing, enterprise AI servers, and personal development systems, and is designed to support varied workloads across data centers, industry, research, and individual creators.
At rack scale, the XA GB721-E2 AI POD, built on NVIDIA GB300 NVL72, integrates 72 Blackwell Ultra GPUs, Grace CPUs, liquid cooling, and networking to deliver high-density computing for enterprise and national-cloud platforms. ASUS also highlighted the ESC NM2N721-E1, based on NVIDIA GB200 NVL72, which supports national-scale and sovereign-AI deployments with a combined compute-and-storage architecture.
The newly introduced ESC8000A-E13X, an NVIDIA RTX PRO Server, features dual AMD EPYC 9005 processors, eight NVIDIA RTX PRO 6000 Server Edition GPUs, and 400G SuperNIC connectivity for enterprise AI and industrial HPC tasks. Additional systems include the HGX B300-powered XA NB3I-E12 for high-density training and the HGX B200-based ESC NB8-E11 for scalable inference.
For creators and developers, ASUS showcased systems powered by NVIDIA Grace Blackwell Superchips, including the ExpertCenter Pro ET900N G3 and the Ascent GX10.