Current Configuration
(List below serves as quick links to each section)
Chassis Options
Processors
Deployment Mode
Storage Controller/Boot
mLOM Options
PCIe Adapters VIC/NIC
GPU options
Drive Cache
Drives Capacity
Rail Kit
Security & TPM
Warranty
Power Supply Units
Cisco HyperFlex HX220c M5 Rack Server CTO
What's New
- (2) 2nd Gen Intel Xeon Scalable Processors
- (24) DDR4 2933 MT/s DIMMs
- (10) SAS/SATA/NVMe Drives
- Slot 1 System SSD
- Slot 2 Caching SSD
- Slots 3 - 10 Persistent data drives
- (2) PCIe 3.0 slots
- Riser 1: FH, 3/4-length, CPU 1 (x16 PCIe 3.0)
- Riser 2: HH, half-length, CPU 2 (x16 PCIe 3.0)
- (7) Fans
- (2) Power Supply Units
- (1) Dedicated mLOM slot
- (1) Dedicated Internal SAS HBA Slot
- (1) M.2 Supported for Boot
- (1) Micro SD Slot on Riser 1
- (2) GPUs Supported
Ideal for:
- Big Data Analytics
- Cloud Servers
- Database
- Hyper-Converged Infrastructure (HCI)
- Virtual Desktop Infrastructure (VDI)
- Virtualization
The Cisco HyperFlex HX220c M5 SX Node is a 1RU system tailored for hyper-converged infrastructure, ideal for workloads like virtualization, cloud computing, and databases. Powered by 2nd Gen Intel® Xeon® Scalable Processors with up to 28 cores per CPU and 24 DDR4 DIMM slots supporting up to 3 TB of memory, it delivers exceptional performance. Networking is enhanced with a modular LAN-on-motherboard (mLOM) and support for Cisco VIC 1387 Dual-Port 40Gb or VIC 1457 Quad-Port 10/25Gb adapters. Two PCIe slots offer additional expansion for network or storage needs, ensuring scalability and flexibility in demanding environments.

- Slot 01 (For HyperFlex System drive/Log drive)
- 1 x 2.5 inch SATA SSD
- Slot 02 (For Cache drive)
- 1 x 2.5 inch SATA SSD OR
- 1 x 2.5 inch SED SAS SSD
- Slot 03 through 10 (For Capacity drives)
- Up to 8 x 2.5 inch SAS HDD OR
- Up to 8 x 2.5 inch SED SAS HDD
If SED drives are installed, you must select a minimum of 6 capacity drives.
If SED drives are installed, all cache and capacity drives must be SED drives.
NVMe drives require a two-processor configuration.
The HX-DC-no-FI deployment mode does not support SED drives.
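The drive rules above can be expressed as a simple validation check. This is an illustrative sketch only, not a Cisco tool: the function name, the string-based drive labels, and the rule encoding are assumptions made for the example.

```python
# Illustrative sketch: validate an HX220c M5 drive selection against the
# rules stated above. Drive types are modeled as plain strings (assumption).

def validate_drive_config(cache_drive: str, capacity_drives: list[str],
                          cpus: int, deployment: str) -> list[str]:
    """Return a list of rule violations (empty list means valid)."""
    errors = []
    sed_installed = cache_drive.startswith("SED") or any(
        d.startswith("SED") for d in capacity_drives)

    if sed_installed:
        # With SED installed, a minimum of 6 capacity drives must be selected.
        if len(capacity_drives) < 6:
            errors.append("SED installed: select at least 6 capacity drives")
        # With SED installed, every cache and capacity drive must be SED.
        if not (cache_drive.startswith("SED")
                and all(d.startswith("SED") for d in capacity_drives)):
            errors.append("SED installed: all cache/capacity drives must be SED")
        # The HX-DC-no-FI deployment mode does not support SED drives.
        if deployment == "HX-DC-no-FI":
            errors.append("HX-DC-no-FI does not support SED drives")

    # NVMe drives require a two-processor configuration.
    if "NVMe" in cache_drive and cpus < 2:
        errors.append("NVMe drives require two processors")

    # Slots 3 through 10 hold at most 8 capacity drives.
    if len(capacity_drives) > 8:
        errors.append("at most 8 capacity drives (slots 3-10)")
    return errors
```

For example, selecting an SED cache drive with only 4 SED capacity drives in an HX-DC-no-FI deployment reports two violations (too few capacity drives, and SED unsupported in that mode).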

- Riser 1 (controlled by CPU 1):
- One full-height profile, 3/4-length slot with x24 connector and x16 lane.
- Riser 2 (controlled by CPU 2):
- One half-height profile, half-length slot with x24 connector and x16 lane.
- NOTE: Use of PCIe riser 2 requires a dual CPU configuration.
- Dedicated SAS HBA slot
- An internal slot is reserved for use by the Cisco 12G SAS HBA
One slot for a micro-SD card on PCIe Riser 1 (Option 1 and 1B).
The node supports a modular LOM (mLOM) card to provide additional rear-panel connectivity, such as a Cisco VIC adapter. The horizontal mLOM socket is on the motherboard, under the mRAID riser.
The mLOM socket provides a Gen-3 x16 PCIe lane. The socket remains powered when the node is in 12 V standby power mode and it supports the network communications services interface (NCSI) protocol.
If the card you are replacing is a Cisco VIC 1457 (HX-MLOM-C25Q-04), note that this card requires Cisco HX 4.0(1a) or later.
Dedicated SAS HBA slot, reserved for use by the Cisco 12G SAS HBA
For hardware-based storage control, the node can use a SAS HBA that plugs into a horizontal socket on a dedicated mRAID riser (internal riser 3).
This internal riser 3 sits on top of the mLOM adapter.
The CVR-QSFP-SFP10G is inserted into a QSFP port on the Cisco VIC 1387 installed in the mLOM (Modular LAN-on-Motherboard) slot or PCIe slots of the HX220c M5 SX Node. It allows the server to connect to networks using 10Gb SFP+ transceivers instead of 40Gb QSFP optics, offering flexibility in environments where 10Gb connectivity is required.
The deployment mode in the Cisco HyperFlex HX220c M5 SX Node is critical as it defines how the server integrates into the network and manages workloads. The deployment mode determines whether the server operates with or without Cisco Fabric Interconnect (FI), impacting scalability, compatibility, and feature availability.
This deployment option connects the server to Cisco Fabric Interconnects. Installation for this deployment type can be done using the standalone installer or from Cisco Intersight. This deployment mode has been supported since the launch of HyperFlex.
- Enhanced Management: Servers are connected to Cisco Fabric Interconnects, enabling centralized management of compute, storage, and networking resources.
- High Availability: Ensures seamless integration with Cisco UCS Manager for automated discovery, configuration, and lifecycle management.
- Compatibility: Supports advanced features like stretch clusters, PMem (Persistent Memory), SED (Self-Encrypting Drives), and additional PCIe Cisco VIC adapters.
HX Data Center without Fabric Interconnect | HX-DC-NO-FI
This deployment option allows server nodes to be directly connected to existing switches. Installation for this deployment type can be done from Cisco Intersight only.
- Standalone Mode: Servers connect directly to existing switches, relying on Cisco Intersight for management.
- Simplicity: Ideal for environments with fewer integration requirements or edge deployments.
- Limitations: Features like PMem, SED drives, and stretch clusters are not supported, reducing some advanced capabilities.
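The feature differences between the two deployment modes can be summarized as a small lookup. This is an illustrative sketch under stated assumptions: the mode key "HX-DC-FI" and the feature names are labels chosen for the example (only "HX-DC-NO-FI" appears in the text), and the matrix encodes just the features the text mentions.

```python
# Illustrative sketch: feature support by deployment mode, as described above.
# The mode label "HX-DC-FI" is an assumed shorthand for the Fabric
# Interconnect deployment; "HX-DC-NO-FI" is the direct-to-switch mode.
FEATURES_BY_MODE = {
    "HX-DC-FI": {        # connected to Cisco Fabric Interconnects
        "stretch_clusters": True,
        "pmem": True,
        "sed_drives": True,
    },
    "HX-DC-NO-FI": {     # direct-to-switch, managed via Cisco Intersight
        "stretch_clusters": False,
        "pmem": False,
        "sed_drives": False,
    },
}

def supports(mode: str, feature: str) -> bool:
    """True if the given deployment mode supports the named feature."""
    return FEATURES_BY_MODE[mode][feature]
```

A configurator could consult such a matrix before offering SED drives or stretch-cluster options for a given deployment mode.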
- Height: 1.7 inches (43.2 mm)
- Width: 16.9 inches (429.6 mm)
- Depth: 29.8 inches (757.1 mm)
- Fully Configured: Up to 60 pounds (27.2 kg)