Current Configuration

(List below serves as quick links to each section)

Chassis Option

Processors

Memory

Deployment Mode

Storage Controller/Boot Option

Drives Capacity

Drives Cache

mLOM Options

Riser Card Options

Virtual Interface Card (VIC)

GPU Options

Rail Kit

Security

ECS Warranty

Power Supply Units

Cisco HyperFlex HX240c M5 Rack Server CTO

Cisco

| CTO Servers |

UCSH-HX240C-M5-CTO

 

READ ME

  • Some options may not be displayed until the compatible parent option is chosen (e.g., Chassis – Drives, Processor – RAM, etc.)
  • “Quote” items can be added to your cart and sent to us via the cart page
  • Click the blue bar to close/open a section from view after choosing your options

Chassis Option

Read below to know more about each Chassis Option

Processors

2nd Gen Intel Xeon Scalable CPUs

Supports up to (2) Processors

Memory

Supports up to (24) DIMMs

Deployment Mode

Please read below for information about these options.
Data center deployment mode without fabric interconnect (HX-DC-no-FI) does not support SED drives.

Storage Controller/Boot Option (Optional)

There are dedicated slots for both the boot device and the storage controller.

Drives Capacity (Optional)

Please read below to know which drives are allowed as Capacity Drives and where they are located.

Drives Cache (Optional)

Please read below to know which drives are allowed as Cache Drives and where they are located.

mLOM Options (Optional)

HX-MLOM-C40Q-03, HX-MLOM-C25Q-04 require HXDP 4.5(2c) or higher for data center deployment mode without fabric interconnect (DC-no-FI).
The converter below takes the 40Gb QSFP+ port and allows it to be used with a 10Gb SFP cable.

Riser Card Options (Optional)

Only (1) Riser 1 and (1) Riser 2 are supported.
The micro-SD card mounts internally on Riser 1.
NVMe ports come from Riser B; 2 CPUs must be installed.

Virtual Interface Card (VIC) (Optional)

GPU Options (Optional)

Only the T4 is supported with 1 CPU (max 3) using HX-RIS-1B-240M5; Riser 1B provides 3 PCIe slots (x8, x8, x8), all from CPU1.

Rail Kit (Optional)

Security (Optional)

ECS Warranty (Optional)

Power Supply Units

Supports up to (2) power supplies; both must be identical.

Helpful Tip: Once the desired configuration is selected, click "Add to Cart".
From the cart page you can submit a quote request for best pricing.

What's New

  • (2) 2nd Gen Intel Xeon Scalable Processors
  • (24) DDR4 2933 MT/s DIMMs
  • (12 - 24) SAS/SATA/NVMe Drives
  • (2) Rear Drive Bays
  • (6) PCIe 3.0 slots
  • (6) Fans
  • (2) Power Supply Units
  • (1) Dedicated mLOM slot
  • (1) Dedicated Internal SAS HBA Slot
  • (1) M.2 Supported for Boot
  • (1) Micro SD Slot on Riser 1
  • Up to (6) GPUs Supported

The Cisco HyperFlex HX240c M5 server, in its 12-bay large form factor (LFF) or 24-bay small form factor (SFF) configuration, is designed for high-performance, storage-intensive workloads. Powered by Intel® Xeon® Scalable processors, it supports up to two CPUs and 3 TB of DDR4 memory across 24 DIMM slots, delivering exceptional computational and memory performance.

This 2U chassis accommodates up to 24 hot-swappable SFF drives, offering flexible storage configurations, including all-NVMe, all-flash, or hybrid SSD/HDD setups. With six PCIe expansion slots, the server supports additional GPUs or network cards, enhancing its adaptability for virtualization, AI, and data-intensive applications.


Ideal for:

  • Artificial Intelligence (AI)
  • Big Data Analytics
  • Cloud Servers
  • Corporate IT
  • Data Analytics
  • Database
  • Deep Learning (DL)
  • Edge Servers
  • High-Performance Computing (HPC)
  • Hyper-Converged Infrastructure (HCI)
  • Language Model
  • Machine Learning (ML)
  • Network Function Virtualization (NFV)
  • Virtualization
  • Virtual Machines (VM)
Chassis Options
Mixing Server Rules

Mixing HX240c Hybrid SED HyperFlex nodes with HX240c All-Flash SED HyperFlex nodes within the same HyperFlex cluster is not supported.

NVMe Drive Rules

NVMe SSDs are supported only in the Caching SSD position, in drive bay 13 for LFF versions or bay 25 for SFF versions. NVMe SSDs are not supported for persistent storage or as the Housekeeping drive.


Housekeeping Drive Rules
  • LFF versions: The Housekeeping SSD must be installed in rear bay 14.
  • SFF versions: The Housekeeping SSD must be installed in front bay 1.
Persistent Drive Rules
  • LFF versions: The persistent data drives must be installed in front bays 1 - 12
  • SFF versions: The persistent data drives must be installed in front bays 2 - 24
Caching Drive Rules
  • LFF versions: The Caching SSD must be installed in rear bay 13.
  • SFF versions: The Caching SSD must be installed in rear bay 25.
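
Taken together, the bay rules above can be expressed as a simple placement check. The sketch below is a minimal Python illustration; the names (LFF_RULES, validate_bay, etc.) are assumptions for this example and not part of any Cisco tool.

```python
# Minimal sketch of the HX240c M5 drive-bay placement rules described above.
# All names here are illustrative; this is not a Cisco utility.

LFF_RULES = {
    "housekeeping": {14},                # rear bay 14
    "caching":      {13},                # rear bay 13 (only position where NVMe caching SSDs are supported)
    "persistent":   set(range(1, 13)),   # front bays 1-12
}

SFF_RULES = {
    "housekeeping": {1},                 # front bay 1
    "caching":      {25},                # rear bay 25 (only position where NVMe caching SSDs are supported)
    "persistent":   set(range(2, 25)),   # front bays 2-24
}

def validate_bay(form_factor: str, role: str, bay: int) -> bool:
    """Return True if a drive with the given role may occupy the given bay."""
    rules = LFF_RULES if form_factor.upper() == "LFF" else SFF_RULES
    return bay in rules[role]

# Example: an SFF caching SSD belongs in rear bay 25, not a front bay.
assert validate_bay("SFF", "caching", 25)
assert not validate_bay("SFF", "caching", 2)
```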
Deployment Mode

The deployment mode in the Cisco HyperFlex HX240c M5 SX Node is critical as it defines how the server integrates into the network and manages workloads. The deployment mode determines whether the server operates with or without Cisco Fabric Interconnect (FI), impacting scalability, compatibility, and feature availability.


HX Data Center with Fabric Interconnect (FI) | HX-DC-FI

This deployment option connects the server to Cisco Fabric Interconnect. The installation for this type of deployment can be done using the standalone installer or from Intersight. This deployment mode has been supported since the launch of HyperFlex.

  • Enhanced Management: Servers are connected to Cisco Fabric Interconnects, enabling centralized management of compute, storage, and networking resources.
  • High Availability: Ensures seamless integration with Cisco UCS Manager for automated discovery, configuration, and lifecycle management.
  • Compatibility: Supports advanced features like stretch clusters, PMem (Persistent Memory), SED (Self-Encrypting Drives), and additional PCIe Cisco VIC adapters.

HX Data Center without Fabric Interconnect | HX-DC-NO-FI

This deployment option allows server nodes to be directly connected to existing switches. The installation for this type of deployment can be done from Intersight only.

  • Standalone Mode: Servers connect directly to existing switches, relying on Cisco Intersight for management.
  • Simplicity: Ideal for environments with fewer integration requirements or edge deployments.
  • Limitations: Features like PMem, SED drives, and stretch clusters are not supported, reducing some advanced capabilities.
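
The feature restrictions above can be summed up in a quick compatibility check. The sketch below is purely illustrative, using placeholder feature names; it is not Cisco's validation logic.

```python
# Illustrative check of the deployment-mode restrictions described above.
# Feature names are placeholders; this is not Cisco validation logic.

UNSUPPORTED_WITHOUT_FI = {"sed_drives", "pmem", "stretch_cluster"}

def check_deployment(mode, selected_features):
    """Return the selected features that conflict with the chosen deployment mode."""
    if mode == "HX-DC-NO-FI":
        return sorted(selected_features & UNSUPPORTED_WITHOUT_FI)
    return []  # HX-DC-FI supports SED drives, PMem, and stretch clusters

print(check_deployment("HX-DC-NO-FI", {"sed_drives", "nvme_cache"}))
# ['sed_drives'] -> SED drives are not supported without Fabric Interconnect
```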
Controllers & Networking Options
Dedicated mLOM

The mLOM on the Cisco HX240c 2U node server has its own dedicated slot. This is the ideal way to connect to the Cisco FI on the network. If you selected HX-DC-no-FI, these options are not supported.

HX-PCIE-C40Q-03 (40G VIC), HX-PCIE-C25Q-04, and HX-PCIE-OFFLOAD-1
The standard PCIe card offerings are:
  • Modular LAN on Motherboard (mLOM)
  • Virtual Interface Card (VIC)
  • Network Interface Card (NIC)

The mLOM VIC 1387 has a 40Gb QSFP port but can be converted to a 10Gb SFP with the CVR-QSFP-SFP10G. This is a perfect solution when you need to provide a 10Gb connection to the Cisco FI 6200 series.

SAS HBA Controller

An internal slot is reserved for the Cisco 12G SAS HBA (HX-SAS-M5HD). This HBA is managed by the Cisco Integrated Management Controller (CIMC).

Supports JBOD mode only (no RAID functionality). Ideal for SDS (Software-Defined Storage) applications. It is also ideal for environments demanding the highest IOPS (for external SSD attach), where a RAID controller can be an I/O bottleneck.

Supports up to 26 internal SAS HDDs and SAS/SATA SSDs.

PCIe Riser Options

Must install 2 CPUs for GPU support and for Riser 2 to be supported.

Cisco UCS HX240c M5 Rack Server Riser 2A: HX-R2A-A100-M5
  • Slot 6: full-height, full-length, x8
  • Slot 5: full-height, full-length, x16 (GPU)
  • Slot 4: full-height, half-length, x16

Cisco UCS HX240c M5 Rack Server Riser 2B: HX-RIS-2B-240M5
  • Slot 6: full-height, full-length, x8
  • Slot 5: full-height, full-length, x16 (GPU: T4, RTX)
  • Slot 4: full-height, half-length, x8

PCIe cable connectors for rear NVMe SSDs
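
For quick reference, the slot layouts listed above can be captured in a small lookup structure. The shape of this dictionary is an assumption made for illustration, not a Cisco data format.

```python
# Slot layouts for the Riser 2 options listed above (illustrative structure only).
RISER_SLOTS = {
    "HX-R2A-A100-M5": {        # Riser 2A
        6: "full-height, full-length, x8",
        5: "full-height, full-length, x16 (GPU)",
        4: "full-height, half-length, x16",
    },
    "HX-RIS-2B-240M5": {       # Riser 2B
        6: "full-height, full-length, x8",
        5: "full-height, full-length, x16 (GPU: T4, RTX)",
        4: "full-height, half-length, x8",
    },
}

print(RISER_SLOTS["HX-RIS-2B-240M5"][5])
# full-height, full-length, x16 (GPU: T4, RTX)
```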

GPU Options

GPUs cannot be mixed.

Must install 2 CPUs for GPU support and for Riser 2 to be supported. All GPU cards require two CPUs and a minimum of two power supplies in the server. 1600 W power supplies are recommended.

Only the T4 is supported with 1 CPU (max 3) using HX-RIS-1B-240M5; Riser 1B provides 3 PCIe slots (x8, x8, x8), all from CPU1.

HX-GPU-T4-16 requires special riser cards (HX-RIS-1-240M5 and HX-RIS-2B-240M5) for a full configuration of 5 or 6 cards.

NVIDIA M10 GPUs support less than 1 TB of total memory in the server. Do not install more than fourteen 64-GB DIMMs when using an NVIDIA GPU card in this server.

Product ID (PID) | PID Description | Card Height | Maximum Cards Per Node
HX-GPU-M10 | NVIDIA M10 | Double-wide (consumes 2 slots) | 2
HX-GPU-T4-16 | NVIDIA T4 PCIE 75W 16GB | Low-profile, single-width | 6
HX-GPU-RTX6000 | NVIDIA QUADRO RTX 6000, PASSIVE, 250W TGP, 24GB | Double-wide (consumes 2 slots) | 2
HX-GPU-RTX8000 | NVIDIA QUADRO RTX 8000, PASSIVE, 250W TGP, 48GB | Double-wide (consumes 2 slots) | 2
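
The GPU rules above reduce to a few checkable constraints. The sketch below is a hypothetical illustration; the function and parameter names are assumptions, not Cisco configurator logic.

```python
# Illustrative check of the GPU rules above; not Cisco configurator logic.
GPU_MAX_PER_NODE = {"HX-GPU-M10": 2, "HX-GPU-T4-16": 6,
                    "HX-GPU-RTX6000": 2, "HX-GPU-RTX8000": 2}

def check_gpu_config(gpu_pid, gpu_count, cpu_count, psu_count, total_mem_gb):
    """Return rule violations for a proposed GPU configuration (one GPU model; mixing is not supported)."""
    issues = []
    if gpu_count > GPU_MAX_PER_NODE.get(gpu_pid, 0):
        issues.append("exceeds the maximum cards per node for %s" % gpu_pid)
    if cpu_count < 2 and not (gpu_pid == "HX-GPU-T4-16" and gpu_count <= 3):
        # Single-CPU exception: up to 3x T4 via Riser 1B (HX-RIS-1B-240M5)
        issues.append("two CPUs required for this GPU configuration")
    if psu_count < 2:
        issues.append("at least two power supplies required (1600 W recommended)")
    if gpu_pid == "HX-GPU-M10" and total_mem_gb >= 1024:
        issues.append("M10 supports less than 1 TB total memory (max fourteen 64-GB DIMMs)")
    return issues

print(check_gpu_config("HX-GPU-M10", 2, 2, 2, 1536))
# ['M10 supports less than 1 TB total memory (max fourteen 64-GB DIMMs)']
```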
Power & Dimensions

Each power supply is certified for high-efficiency operation and offers multiple power output options. This allows users to "right-size" based on server configuration, which improves power efficiency, lowers overall energy costs, and avoids stranded capacity in the data center.

All GPU cards require two CPUs and a minimum of two power supplies in the server. 1600 W power supplies are recommended.


Product ID (PID) | PID Description
HX-PSU1-1050W | 1050W AC power supply for C-Series servers
HX-PSUV2-1050DC | 1050W DC power supply for C-Series servers
HX-PSU1-1600W | 1600W AC power supply for C-Series servers
HX-PSU1-1050EL | Cisco UCS 1050W AC Power Supply for Rack Server (Low Line)
Dimensions
  • Height: 3.43in (87.1mm)
  • Width: 18.96in (481.5mm)
  • Length: 30.44in (773.1mm)
  • Weight:
    • Bare: 8 Bay (35.5lbs/16.1kg) | 24 Bay (40lbs/18.1kg)
    • Min: 8 Bay (37lbs/16.8kg) | 24 Bay (41.5lbs/18.8kg)
    • Max: 8 Bay (45.5lbs/20.6kg) | 24 Bay (59.5lbs/27.0kg)
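
As a sanity check on the weight figures, pounds convert to kilograms at roughly 0.4536 kg per pound; the short sketch below simply verifies the conversions listed above.

```python
# Quick sanity check of the lbs -> kg figures in the weight list above.
LB_TO_KG = 0.45359237

for lbs in (35.5, 40, 37, 41.5, 45.5, 59.5):
    print(f"{lbs} lbs = {lbs * LB_TO_KG:.1f} kg")
# 59.5 lbs works out to about 27.0 kg.
```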