
MSI unveils AI platforms based on NVIDIA MGX architecture


MSI has unveiled its latest AI platforms built on the NVIDIA MGX reference architecture, designed to address the demands of AI, high-performance computing (HPC), and data-intensive applications.

MSI's new offerings include the 4U CG480-S5063 and 2U CG290-S3063 AI servers, both designed to enhance enterprise and cloud data centre operations with scalable performance and strong resilience. The platforms use a modular, building-block design to optimise AI workloads and strengthen high-performance computing capabilities.

"AI adoption has become a critical focus for enterprise data centers, as organizations look to harness advanced AI capabilities to gain competitive advantages," stated Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "MSI's AI servers are designed to meet these evolving needs, offering the scalability, flexibility, and resilience required to keep pace with rapidly growing AI workloads. By integrating the NVIDIA MGX reference architecture, we empower enterprises to build future-proof infrastructure that maximizes performance while minimizing complexity and downtime."

The NVIDIA MGX modular architecture is central to MSI's new AI platforms, providing a scalable and adaptable infrastructure for future AI needs. Both server models, the CG480-S5063 and the CG290-S3063, are tailored to the evolving demands of GPU and AI server technologies, a design intended to support long-term innovation and adaptability for enterprises and data centres alike.

Designed for advanced AI workloads, the CG480-S5063 4U AI server platform supports tasks such as large language model (LLM) training, deep learning, and complex data analytics. It includes dual Intel Xeon 6 processors and eight Full-Height, Full-Length (FHFL) dual-width GPU slots, supporting the NVIDIA H200 NVL and NVIDIA RTX PRO 6000 Blackwell Server Edition at power ratings of up to 600W. The server is equipped with 32 DDR5 DIMM slots and 20 PCIe 5.0 E1.S NVMe bays, providing high memory bandwidth and fast data access. Its modular design and comprehensive storage capabilities make it suitable for next-generation AI and HPC applications, offering significant performance and scalability.

Meanwhile, the CG290-S3063 2U AI server platform caters to diverse AI and HPC workloads. It features a single-socket Intel Xeon 6 processor and four FHFL dual-width GPU slots, and is offered in two SKUs: a 600W front I/O configuration and a 400W rear I/O configuration. With PCIe 5.0 expansion slots, eight 2.5-inch drive bays, and two M.2 NVMe slots, this model delivers flexible configurations alongside its computing power, targeting applications from small-scale inferencing to large-scale AI training.
