What's New for Cisco X9508 M6 7U Enclosure
- Supports Up to (8) Nodes
- (4) Node Options | x210c, x210c Universal, x210c GPU, or x440p GPU
- (2) Intelligent Fabric Modules (IFMs)
- (2) X-Fabric Modules (X-FMs)
- (6) Hot Swap Fans
The Cisco UCS X-Series Modular System starts with the X9508 chassis, designed for adaptability and hybrid cloud integration. Its midplane-free architecture enables I/O connectivity through front-loaded compute nodes that intersect with rear I/O modules, supporting PCIe Gen4 today and future protocols.
The 7RU chassis houses 8 flexible slots for compute nodes, GPUs, disk storage, and NVMe resources. Two Intelligent Fabric Modules connect to Cisco UCS 6400/6536 Fabric Interconnects, while X-Fabric Technology facilitates modular updates for evolving technologies. Six 2800W PSUs provide efficient power delivery with multiple redundancy options. Advanced thermal management supports future liquid cooling for high-power processors.
The Cisco UCS X-Series Direct adds self-contained integration with internal Cisco UCS Fabric Interconnects 9108, enabling unified fabric connectivity and management through Cisco Intersight or UCS Manager.
CTO Configuration Support
Need help with the configuration? Contact us today!
Node Options
UCS x210c Node M6
1U
(2) 3rd Gen Intel Xeon Scalable Processors
(32) DDR4 DIMMs
Up to (6) SAS/SATA/NVMe Drives
Up to (2) GPUs
Up to (8) Per x9508 Enclosure
Cisco x210c Node Options
UCSX-X10C-PT4F - Up to 6 NVMe drives
UCSX-X10C-RAIDF - Up to 6 SAS/SATA or 4 NVMe drives
UCSX-X10C-GPUFM - Up to 2 NVIDIA T4 GPUs & 2 NVMe drives
UCS x440p Node
1U
Up to (2) NVMe Drives
Up to (4) GPUs
Up to (4) Per x9508
Must be paired with (1) x210c Compute Node
Requires the X-Fabric Modules to be installed
Mezzanine risers must match in paired nodes
- Cisco UCS x440p Spec Sheet
- Cisco UCS x440p Data Sheet
- Cisco UCS x440p Service Guide
The UCSX-V4-PCIME or UCSX-V4-Q25GME mezzanine card is required when an x210c compute node is paired with an x440p PCIe node.
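To make the pairing rules above concrete, here is a minimal, hypothetical validation sketch. It is not a Cisco tool; the part numbers come from this page, everything else is illustrative.

```python
# Hypothetical chassis-pairing check based on the x440p rules described above.
REQUIRED_MEZZ = {"UCSX-V4-PCIME", "UCSX-V4-Q25GME"}

def validate_x440p_pairing(x210c_mezz, x440p_count, x_fabric_modules):
    """Return a list of rule violations for a proposed X9508 configuration."""
    errors = []
    if x440p_count > 4:
        errors.append("A maximum of (4) x440p nodes fit in one X9508 chassis.")
    if x440p_count > 0 and x_fabric_modules < 2:
        errors.append("Both X9416 X-Fabric Modules must be installed when any x440p is present.")
    # Each x440p must be paired with an x210c carrying a supported mezzanine card.
    paired = [m for m in x210c_mezz if m in REQUIRED_MEZZ]
    if x440p_count > len(paired):
        errors.append("Each x440p needs an x210c with a UCSX-V4-PCIME or UCSX-V4-Q25GME mezzanine.")
    return errors

# Example: two x440p nodes, but only one x210c has the required mezzanine card.
print(validate_x440p_pairing(["UCSX-V4-PCIME", None], x440p_count=2, x_fabric_modules=2))
```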
Fabric Options
Choose two Fabric Modules of the same type
You cannot mix IFMs and an integrated Fabric Interconnect in the same chassis
UCSX-I-9108-25G
Intelligent Fabric Module | 8x 25-Gbps SFP28 ports
Up to 50 Gbps of unified fabric connectivity per compute node with two IFMs.
UCSX-I-9108-100G
Intelligent Fabric Module | 8x 100-Gbps QSFP28 ports
Up to 200 Gbps of unified fabric connectivity per compute node with two IFMs.
UCSX-S9108-100G
Fabric Interconnect | 8 ports, 1/10/25/40/100 Gbps
Ethernet, FCoE, and Fibre Channel
Intelligent Fabric Modules
Up to two Intelligent Fabric Modules (IFMs) serve as line cards, managing data multiplexing, chassis resources, and compute node communication. They ensure redundancy and failover with paired configurations.
Each IFM features 8x SFP28 (25 Gbps) or 8x QSFP28 (100 Gbps) connectors, linking compute nodes to fabric interconnects. Compute nodes interface with IFMs via upper mezzanine cards (mLOMs) using orthogonal connectors. Supported configurations include UCS 9108-25G, 9108-100G IFMs, or Fabric Interconnect 9108 100G.
Fabric Interconnect
The Cisco UCS Fabric Interconnect 9108 100G is a high-performance switch offering up to 1.6 Tbps throughput with 6x 40/100-Gbps Ethernet ports and 2 unified ports supporting Ethernet or 8 Fibre Channel ports (8/16/32-Gbps). All ports support FCoE, and breakout options enable 10/25-Gbps or 1-Gbps Ethernet.
It provides eight 100G or thirty-two 25G backplane connections to X-Series compute nodes, depending on the VIC used. Additional features include a network management port, console port, and USB port for configuration.
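The 50-Gbps and 200-Gbps per-node figures listed above are simple arithmetic: with two IFMs installed, each module contributes one backplane connection to a node at its port speed. A rough, hypothetical sketch of that math (not from Cisco documentation):

```python
# Back-of-the-envelope per-node fabric bandwidth with two IFMs installed,
# assuming each IFM contributes one backplane connection per compute node.
IFM_PORT_SPEED_GBPS = {
    "UCSX-I-9108-25G": 25,
    "UCSX-I-9108-100G": 100,
}

def per_node_bandwidth(ifm_model, ifm_count=2):
    """Unified fabric bandwidth per compute node, in Gbps."""
    return IFM_PORT_SPEED_GBPS[ifm_model] * ifm_count

print(per_node_bandwidth("UCSX-I-9108-25G"))   # 50 Gbps, matching the figure above
print(per_node_bandwidth("UCSX-I-9108-100G"))  # 200 Gbps
```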
mLOM / Mezzanine Cards
X-Fabric Modules (X-FMs) | UCSX-F-9416
The X-Fabric Modules are required when the server chassis contains a Cisco UCS X440p PCIe Node.
Each X-Fabric module provides native PCIe Gen4 x16 connectivity to the X210c or X410c Compute node and the Cisco UCS X440p PCIe Node.
The X-Fabric module is not required if your server chassis contains only Cisco UCS X-Series compute nodes, such as the Cisco UCS X210c.
X-Fabric Modules are always deployed in pairs to support GPU acceleration through Cisco UCS X440p PCIe Nodes. Therefore, two X-Fabric Modules must be installed in a server chassis that contains any number of PCIe nodes.
Configuring Cisco UCS X210c or X410c Compute Nodes in the X9508 chassis with mLOM and mezzanine VICs provides up to 200 Gbps network bandwidth and enables connectivity to PCIe I/O devices via X-Fabric modules. These modules allow direct PCIe Gen4 connections between compute nodes, storage, and communication devices, reducing cost, power, and latency.
The Cisco UCS X9416 X-Fabric supports x16 high-speed PCIe Gen4 links from each module to compute nodes, with slots located at the chassis rear. Compute nodes connect directly to the X-Fabric modules via mezzanine cards, eliminating the need for a midplane.
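For context on what an x16 Gen4 link carries, a quick sketch of the standard PCIe arithmetic (generic PCIe figures, not from this spec sheet): 16 GT/s per lane with 128b/130b encoding across 16 lanes works out to roughly 252 Gbps (about 31.5 GB/s) per direction.

```python
# Usable bandwidth of a PCIe Gen4 x16 link (standard PCIe arithmetic).
GEN4_GT_PER_LANE = 16            # 16 GT/s per lane
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line encoding
LANES = 16

usable_gbps = GEN4_GT_PER_LANE * ENCODING_EFFICIENCY * LANES
print(f"{usable_gbps:.1f} Gbps (~{usable_gbps / 8:.1f} GB/s) per direction")
# ~252.1 Gbps (~31.5 GB/s) per direction
```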
GPU Support | (3) GPU Options
x210c SW GPUs | UCSX-X10C-GPUFM
Supports (2) SW GPUs
Supports (2) U.2/U.3 NVMe Drives
Does not need the X-Fabric
x440p Riser Type A | UCSX-RIS-A-440P
Supports (2) DW GPUs
Requires (1) x210c Compute Node
The x210c node requires (1) UCSX-V4-PCIME or UCSX-V4-Q25GME
Requires the X-Fabric Modules
x440p Riser Type B | UCSX-RIS-B-440P
Supports (4) SW GPUs
Requires (1) x210c Compute Node
The x210c node requires (1) UCSX-V4-PCIME or UCSX-V4-Q25GME installed
Requires the X-Fabric Modules
For the server chassis to support any number of Cisco UCS X440p PCIe Nodes, both Cisco UCS X9416 X-Fabric Modules must be installed to provide proper PCIe signaling and connectivity to the node slots on the front of the server chassis.
The x210c and the x440p must have matching mezzanine cards installed so that both can connect to the X-Fabric. The X-Fabric is the pathway the x210c uses to communicate with the x440p PCIe node.
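As a quick reference, the three GPU options above can be summarized as a lookup table. This is a hypothetical illustration assembled from the figures on this page, not a Cisco configuration tool.

```python
# GPU options for the X9508, summarized from the list above (illustrative only).
GPU_OPTIONS = {
    "UCSX-X10C-GPUFM": {   # x210c front mezzanine
        "gpus": "2x single-wide (SW)",
        "needs_x440p": False,
        "needs_x_fabric": False,
        "needs_mezz": None,
    },
    "UCSX-RIS-A-440P": {   # x440p Riser Type A
        "gpus": "2x double-wide (DW)",
        "needs_x440p": True,
        "needs_x_fabric": True,
        "needs_mezz": "UCSX-V4-PCIME or UCSX-V4-Q25GME on the paired x210c",
    },
    "UCSX-RIS-B-440P": {   # x440p Riser Type B
        "gpus": "4x single-wide (SW)",
        "needs_x440p": True,
        "needs_x_fabric": True,
        "needs_mezz": "UCSX-V4-PCIME or UCSX-V4-Q25GME on the paired x210c",
    },
}

for part, info in GPU_OPTIONS.items():
    print(part, "->", info["gpus"])
```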
Summary Overview
x210c Configuration Overview
When the X210c Compute Node is installed, the mLOM card connects directly to the Fabric Module (IFM or FI) at the top rear of the chassis, enabling networking and management traffic. For the GPU version of the X210c, the PCIe Mezzanine card must also be installed to provide the additional PCIe lanes required to support the GPUs and NVMe drives. This configuration ensures sufficient PCIe bandwidth for the compute-intensive and storage needs of the GPU version while maintaining connectivity to the Fabric Module for external networking.
x440p Configuration Overview
If the X440p PCIe Node is installed, the X210c Compute Node requires a mezzanine card, which can be a PCI Mezzanine Card, VIC 14825, or VIC 15422. When using a VIC (14825 or 15422), a bridge card is required to connect the mLOM VIC to the Mezzanine VIC, enabling proper PCIe connectivity.
In this configuration, PCIe connectivity is allocated to the two X-Fabric modules at the bottom of the chassis to reach the x440p, while the remaining lanes are routed to the Fabric Module (IFM or FI) for external networking. When the PCI Mezzanine Card is installed instead, no bridge card is needed, and the allocation is the same: lanes to the X-Fabric and the rest to the IFM or FI. This setup ensures efficient connectivity for both internal PCIe resources and external networking.
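A minimal sketch of the mezzanine/bridge-card decision described above (hypothetical helper; the card names come from this page):

```python
# Does the x210c need a bridge card when paired with an x440p?
# Rule from the overview above: VIC mezzanine cards need a bridge card to
# link the mLOM VIC to the mezzanine VIC; the plain PCI Mezzanine Card does not.
VIC_MEZZ_CARDS = {"VIC 14825", "VIC 15422"}

def needs_bridge_card(mezz_card: str) -> bool:
    return mezz_card in VIC_MEZZ_CARDS

print(needs_bridge_card("VIC 15422"))           # True  -> bridge card required
print(needs_bridge_card("PCI Mezzanine Card"))  # False -> no bridge card needed
```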
Power Supply Units
The X9508 chassis accommodates up to six power supplies. The six dual-feed power supplies provide an overall chassis power capability of greater than 9000 W and can be configured as N, N+1, N+2, or N+N redundant.
Choose from 2 to 6 power supplies (see the sketch after this list):
- If 1 node is selected, a minimum of 2 PSUs is required
- If 2 to 6 nodes are selected, a minimum of 4 PSUs is required
- If 7 or 8 nodes are selected, a minimum of 6 PSUs is required
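A minimal sketch of the minimum-PSU rule above (hypothetical helper, not part of any Cisco tooling):

```python
# Minimum PSU count for the X9508, per the node-quantity rule above.
def min_psus(node_count: int) -> int:
    if not 1 <= node_count <= 8:
        raise ValueError("The X9508 chassis holds 1 to 8 nodes.")
    if node_count == 1:
        return 2
    if node_count <= 6:
        return 4
    return 6

for nodes in (1, 4, 8):
    print(f"{nodes} node(s) -> at least {min_psus(nodes)} PSUs")
```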