
Cisco UCS X-Series X9508 M7 7U Enclosure CTO


UCSX-9508-M7-MLB

READ ME

  • Some options may not be displayed until a compatible parent option is chosen, e.g. Chassis – Drives, Processor – RAM, etc.
  • “Quote” items can be added to your cart and sent to us via the cart page
  • Click the blue bar to open/close a section from view after choosing your options

Node Options

When installing the x440p PCIe Node, you must also install (1) Compute Node (x210c or x410c)

Processor

The x210c Supports (2) Processors
The x410c Supports (4) Processors

x210c 4th Gen Intel

x210c Supports up to (2) Processors

x210c 5th Gen Intel

x210c Supports up to (2) Processors

x410c 4th Gen Intel

x410c Supports up to (4) Processors

Memory for 4th Gen Intel (Optional)

x210c Supports (32) DIMMs
x410c Supports (64) DIMMs

Memory for 5th Gen Intel (Optional)

x210c Supports up to (32) DIMMs

Controllers/Boot (Optional)

Supports either SATA or NVMe Boot Options

Drive Options SATA/SAS (Optional)

Supports up to (6) Drives

Drive Options NVMe U.2/U.3 (Optional)

RAID Node controller for 6 SAS/SATA/U.3 NVMe drives or up to 4 U.2 NVMe drives (drive slots 1-4) and SAS/SATA/U.3 NVMe (drive slots 5-6)
NVMe Node Pass-through controller for up to 6 U.2/U.3 NVMe drives
NVMe GPU Node supports up to (2) NVMe drives

mLOM / VIC Mezzanine Cards

Please read below about which Mezzanine is needed for each system.

Fabric Modules

The Intelligent Fabric Module (IFM) or the Fabric Interconnect connects to the mLOM Mezzanine in the Node Servers.

X-Fabric (Optional)

The X-Fabric is how the Compute Node connects to the PCIe Node.
This is mandatory when installing the x440p. You will also need the VIC 15422 or the PCIe Mezzanine card installed.

x440p PCIe Riser (Optional)

The x440p can support either (2) DW GPUs or (4) SW GPUs.
Read below on which Riser option you need.

x210c GPU (Optional)

These GPUs are supported by the x210c GPU Front Mezzanine option
Do not mix GPUs

x440p GPU (Optional)

x440p supports dual risers and has (2) Options.
Riser A holds (1) DW GPU per riser
Riser B holds (2) SW GPUs per riser
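As a rough sketch, the riser rules above can be expressed as a small capacity lookup (Python used purely for illustration; the function and variable names are hypothetical, not part of any Cisco tool):

```python
# Hypothetical helper illustrating the x440p riser rules above.
# The x440p has dual risers: Riser A holds (1) double-wide GPU per riser,
# Riser B holds (2) single-wide GPUs per riser.
RISER_CAPACITY = {
    "A": {"gpu_width": "DW", "gpus_per_riser": 1},
    "B": {"gpu_width": "SW", "gpus_per_riser": 2},
}

def x440p_max_gpus(riser: str, risers_installed: int = 2) -> int:
    """Total GPUs an x440p can hold for a given riser option."""
    return RISER_CAPACITY[riser]["gpus_per_riser"] * risers_installed
```

With both risers installed this reproduces the totals above: Riser A gives (2) DW GPUs, Riser B gives (4) SW GPUs.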

Trusted Platform Module (TPM) (Optional)

Warranty

Power Supply Units

Supports up to (6) PSUs

Helpful Tip: Once your desired configuration is selected, click "Add to Cart".
From the cart page you can submit a quote request for best pricing

What's New

  • Supports Up to (8) Nodes
  • (3) Nodes | x210c, x410c & x440p
  • (2) Node Options | 6x Uni or 6x NVMe
  • (2) Intelligent Fabric Modules (IFMs)
  • (2) X-Fabric Modules (X-FMs)
  • (6) Hot Swap Fans

The Cisco UCS X-Series Modular System starts with the X9508 chassis, designed for adaptability and hybrid cloud integration. Its midplane-free architecture enables I/O connectivity via front-loading compute nodes intersecting with rear I/O modules, supported by PCIe Gen4 and future protocols.

The 7RU chassis houses up to 8 flexible slots for compute nodes, GPUs, disk storage, and NVMe resources. Two intelligent fabric modules connect to Cisco UCS 6400/6536 Fabric Interconnects, while X-Fabric Technology facilitates modular updates for evolving technologies. Six 2800W PSUs provide efficient power delivery with multiple redundancy options. Advanced thermal management supports future liquid cooling for high-power processors.

The Cisco UCS X-Series Direct adds self-contained integration with internal Cisco UCS Fabric Interconnects 9108, enabling unified fabric connectivity and management through Cisco Intersight or UCS Manager.



Ideal for:

  • Artificial Intelligence (AI)
  • Big Data Analytics
  • Cloud Servers
  • Corporate IT
  • Data Analytics
  • Database
  • Deep Learning (DL)
  • High-Performance Computing (HPC)
  • Hyper-Converged Infrastructure (HCI)
  • Language Model
  • Machine Learning (ML)
  • Network Function Virtualization (NFV)
  • Retail Servers
  • Telecom Networks
  • Video Analytics
  • Video Streaming
  • Virtualization
  • Virtual Machines (VM)
  • Web Tech
  • Anything as a Service (XaaS)
Node Options

The UCSX-V4-PCIME or UCSX-V4-Q25GME is required when a x210c compute node is paired with a x440p PCIe node.

Fabric Options

Choose two Fabric Modules of the same type
You cannot mix IFMs and integrated Fabric Interconnects in the same chassis

Intelligent Fabric Modules

Up to two Intelligent Fabric Modules (IFMs) serve as line cards, managing data multiplexing, chassis resources, and compute node communication. They ensure redundancy and failover with paired configurations.

Each IFM features 8x SFP28 (25 Gbps) or 8x QSFP28 (100 Gbps) connectors, linking compute nodes to fabric interconnects. Compute nodes interface with IFMs via upper mezzanine cards (mLOMs) using orthogonal connectors. Supported configurations include UCS 9108-25G, 9108-100G IFMs, or Fabric Interconnect 9108 100G.

Fabric Interconnect

The Cisco UCS Fabric Interconnect 9108 100G is a high-performance switch offering up to 1.6 Tbps throughput with 6x 40/100-Gbps Ethernet ports and 2 unified ports supporting Ethernet or 8 Fibre Channel ports (8/16/32-Gbps). All ports support FCoE, and breakout options enable 10/25-Gbps Ethernet or 1-Gbps Ethernet.

It provides eight 100G or thirty-two 25G backplane connections to X-Series compute nodes, depending on the VIC used. Additional features include a network management port, console port, and USB port for configuration.

mLOM / Mezzanine Cards
Modular LAN on motherboard (mLOM)
  • Cisco UCS VIC 15230 2x100G mLOM | UCSX-ML-V5D200GV2
  • Cisco UCS VIC 15231 2x100G mLOM | UCSX-ML-V5D200G
  • Cisco UCS VIC 15420 4x25G mLOM | UCSX-ML-V5Q50G
  • *15231 does not support bridging
Virtual Interface Card (VIC)
  • Cisco UCS VIC 15422 4x25G | UCSX-ME-V5Q50G
  • *Mezzanine VICs come with a Bridging Card
PCIe Pass-Thru
  • Cisco UCS PCI Mezz card for X-Fabric | UCSX-V4-PCIME
Bridge Connector
  • UCS VIC 15000 bridge connector (UCSX-V5-BRIDGE)

The UCSX-V4-PCIME or UCSX-V4-Q25GME is required when a compute node is paired with a PCIe node
1. The VIC 15422 only works with mLOM 15420
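The two pairing rules above can be sketched as a small validation helper (a hypothetical illustration in Python; the function name and return format are assumptions, only the part IDs come from the list above):

```python
# Illustrative check for the mezzanine pairing rules above.
def check_x440p_pairing(mlom: str, mezz: str, has_x440p: bool) -> list:
    """Return a list of configuration problems (empty if the combo is valid)."""
    problems = []
    # A compute node paired with an x440p PCIe node needs one of these mezzanines.
    if has_x440p and mezz not in ("UCSX-V4-PCIME", "UCSX-V4-Q25GME", "UCSX-ME-V5Q50G"):
        problems.append("x440p pairing requires UCSX-V4-PCIME, UCSX-V4-Q25GME, "
                        "or the VIC 15422 (UCSX-ME-V5Q50G)")
    # Rule 1 above: the VIC 15422 only works with the mLOM 15420.
    if mezz == "UCSX-ME-V5Q50G" and mlom != "UCSX-ML-V5Q50G":
        problems.append("VIC 15422 requires mLOM 15420 (UCSX-ML-V5Q50G)")
    return problems
```

For example, a VIC 15422 mezzanine paired with an mLOM 15420 passes, while the same mezzanine under a 100G mLOM is flagged.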

X-Fabric Modules (X-FMs) | UCSX-F-9416
X-Fabric Modules (X-FMs) | UCSX-F-9416-D
  • Supports up to 2 per Enclosure
  • Connects x210c, x410c Compute Node to x440p Node
  • Required when x440p's are installed
  • Required when GPUs or other PCIe resources need to be shared across multiple nodes in the chassis.
Supported Configurations
  • UCS VIC 15230 mLOM + 14825 Mezz Card
  • UCS VIC 15230 mLOM + PCIe Mezz Card

The X-Fabric modules are required when the server chassis contains the Cisco UCS X440p PCIe node

Each X-Fabric module provides native PCIe Gen4 x16 connectivity to the X210c or X410c Compute node and the Cisco UCS X440p PCIe Node.

The X-Fabric module is not required if your server chassis contains only Cisco UCS X-Series compute nodes, such as the Cisco UCS X210c.

X-Fabric Modules are always deployed in pairs to support GPU acceleration through the Cisco UCS X440p PCIe nodes. Therefore, two PCIe modules must be installed in a server chassis that contains any number of PCIe nodes.

Configuring Cisco UCS X210c or X410c Compute Nodes in the X9508 chassis with mLOM and mezzanine VICs provides up to 200 Gbps network bandwidth and enables connectivity to PCIe I/O devices via X-Fabric modules. These modules allow direct PCIe Gen4 connections between compute nodes, storage, and communication devices, reducing cost, power, and latency.

The Cisco UCS X9416 X-Fabric supports x16 high-speed PCIe Gen4 links from each module to compute nodes, with slots located at the chassis rear. Compute nodes connect directly to the X-Fabric modules via mezzanine cards, eliminating the need for a midplane.

GPU Support | (3) GPU Options

For the server chassis to support any number of Cisco UCS X440p PCIe Nodes, both Cisco UCS X9416 X-Fabric Modules must be installed to provide proper PCIe signaling and connectivity to the node slots on the front of the server chassis.


The x210c and the x440p need to have the same Mezzanine cards installed so they can both talk to the X-Fabric. The X-Fabric is the pathway for the x210c to communicate with the x440p node server.

Summary Overview
x210c Configuration Overview

When the X210c Compute Node is installed, the mLOM card connects directly to the Fabric Module (IFM or FI) at the top rear of the chassis, enabling networking and management traffic. For the GPU version of the X210c, the PCIe Mezzanine card must also be installed to provide the additional PCIe lanes required to support the GPUs and NVMe drives. This configuration ensures sufficient PCIe bandwidth for the compute-intensive and storage needs of the GPU version while maintaining connectivity to the Fabric Module for external networking.


x440p Configuration Overview

If the X440p PCIe Node is installed, the X210c Compute Node requires a mezzanine card, which can be a PCI Mezzanine Card, VIC 14825, or VIC 15422. When using a VIC (14825 or 15422), a bridge card is required to connect the mLOM VIC to the Mezzanine VIC, enabling proper PCIe connectivity.

In this configuration, 2 PCIe lanes are allocated to the X-Fabric modules at the bottom of the chassis to connect to the X440p, while the remaining PCIe lanes are routed to the Fabric Module (IFM or FI) for external networking. Conversely, when the PCI Mezzanine Card is installed, no bridge card is needed, but the PCIe lane allocation remains the same: 2 lanes to the X-Fabric and the rest to the IFM or FI. This setup ensures efficient connectivity for both internal PCIe resources and external networking capabilities.

Power Supply Units

The X9508 chassis accommodates up to six power supplies. The six dual feed power supplies provide an overall chassis power capability of greater than 9000 W, and can be configured as N, N+1, N+2, or N+N redundant.

Choose from 2 to 6 power supplies
  • If (1) node is installed, a minimum of (2) PSUs is required
  • If (2) to (6) nodes are installed, a minimum of (4) PSUs is required
  • If (7) or (8) nodes are installed, a minimum of (6) PSUs is required
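The minimum-PSU rules above amount to a simple lookup; a minimal sketch (the function name is hypothetical, the thresholds are exactly the rules listed):

```python
def min_psus(node_count: int) -> int:
    """Minimum PSUs required in the X9508 chassis for a given
    number of installed nodes, per the selection rules above."""
    if not 1 <= node_count <= 8:
        raise ValueError("X9508 chassis holds 1 to 8 nodes")
    if node_count == 1:
        return 2   # (1) node -> at least (2) PSUs
    if node_count <= 6:
        return 4   # (2)-(6) nodes -> at least (4) PSUs
    return 6       # (7)-(8) nodes -> all (6) PSUs
```

Redundancy mode (N, N+1, N+2, or N+N) is then chosen on top of this minimum.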

