Current Configuration
(List below serves as quick links to each section)
Chassis Options
Processors
Memory
Topology Mode
Topo5 Mode Supported Cards
Storage Controller/Boot
Hybrid Drives Front/Rear Capacity
Hybrid Drives Front Cache
Hybrid Drives Front System
ALL-FLASH Drives Front/Rear Capacity
ALL-FLASH Drives Front Cache
ALL-FLASH Drives Front System
Risers
GPU Options
Rail Kit
Security / TPM
ECS Warranty
Power Supply Units (PSU)
Cisco HyperFlex Edge HX240c M6 Rack Server CTO
What's New
- (2) 3rd Gen Intel Xeon Scalable Processors
- (32) DDR4 3200 MT/s DIMMs
- (24) SAS/SATA HDD/SSD
- (4) Rear Drive Bays
- NVMe not Supported
- (8) PCIe 3.0 slots
- (6) Fans
- (2) Power Supply Units
- (1) Dedicated mLOM slot
- (2) Dedicated Internal SAS HBA Slots
- (2) Dedicated M.2 Slots for Boot
- (2) GPUs Supported
HyperFlex Edge clusters can be configured in 2, 3 or 4 node configurations. Single node clusters and clusters larger than 4 nodes are not supported with HyperFlex Edge.
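The supported cluster sizes can be captured in a trivial check (a sketch; the function name is illustrative, not a Cisco API):

```python
def is_supported_edge_cluster(node_count: int) -> bool:
    """HyperFlex Edge supports only 2-, 3-, or 4-node clusters.

    Single-node clusters and clusters larger than 4 nodes are not
    supported with HyperFlex Edge.
    """
    return node_count in (2, 3, 4)
```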
The Cisco HyperFlex HX240c Edge Server is purpose-built for remote offices, branch locations, and edge environments, delivering high-capacity storage and compute capabilities in a compact 2U form factor. Designed to simplify operations in distributed locations, the HX240c Edge Node supports hybrid storage configurations with up to 24 front-facing SAS/SATA drives and optional rear-facing drives, enabling flexibility for varying workloads.
It operates seamlessly without requiring Cisco UCS Fabric Interconnects, relying on top-of-rack Ethernet switches for streamlined deployment and management. Managed through Cisco Intersight, this node provides centralized, cloud-based oversight, making it an ideal choice for businesses seeking reliable and efficient edge computing solutions.
Ideal for:
- Artificial Intelligence (AI)
- Big Data Analytics
- Cloud Servers
- Data Analytics
- Database
- Deep Learning (DL)
- Edge Servers
- High-Performance Computing (HPC)
- Hyper-Converged Infrastructure (HCI)
- Machine Learning (ML)
- Network Function Virtualization (NFV)
- Video Analytics
- Video Streaming
- Virtualization
- Virtual Machines (VM)
Hybrid Edge HX240c M6 24x SAS/SATA | HX-E-240C-M6SX
Up to 24 front SFF hard drives (HDDs) and solid-state drives (SSDs). The 24 drives are used as follows:
- Drive bays 1–4 support SAS/SATA HDDs or SSDs
- Drive bays 5–24 support SAS/SATA HDDs only
- (3 to 23) SAS/SATA HDD/SSD (for capacity)
- One SAS/SATA SSD (for caching)
- One SAS/SATA SSD (system drive for HXDP operations)
Up to 4 SFF SAS/SATA rear drives (Optional)
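The per-bay drive-type rules above can be sketched as a small lookup (illustrative only; the function name is an assumption, not a Cisco tool):

```python
def allowed_front_drive_types(bay: int) -> set:
    """Front drive bay rules for the hybrid HX-E-240C-M6SX chassis:
    bays 1-4 accept SAS/SATA HDDs or SSDs; bays 5-24 accept HDDs only."""
    if not 1 <= bay <= 24:
        raise ValueError("front SFF drive bays are numbered 1-24")
    return {"HDD", "SSD"} if bay <= 4 else {"HDD"}
```

For example, `allowed_front_drive_types(3)` returns `{"HDD", "SSD"}`, while `allowed_front_drive_types(10)` returns `{"HDD"}`.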
I/O Centric Configuration
Up to 8 PCIe 3.0 Slots
Supports a boot-optimized RAID controller carrier that holds two SATA M.2 SSDs.
All Flash Edge HX240c 24x SSD | HXAF-E-240C-M6SX
Up to 24 front SFF solid-state drives (SSDs). The 24 drives are used as follows:
- Drive bays 1–24 support SAS/SATA SSDs
- (3 to 23) SAS/SATA SSD (for capacity)
- One SAS/SATA SSD (for caching)
- One SAS/SATA SSD (system drive for HXDP operations)
Storage Centric Configuration
Up to 4 SFF SAS/SATA rear drives (Optional)
I/O Centric Configuration
Up to 8 PCIe 3.0 Slots
All-Flash means only SSD drives are supported.
NVMe drives are not supported.
This HBA supports up to 16 SAS or SATA drives (the HX-E-240-M6SX and HXAF-E-240-M6SX servers have 24 front drives and 2 or 4 rear drives) operating at 3 Gb/s, 6 Gb/s, and 12 Gb/s. It supports JBOD or pass-through mode (not RAID) and plugs directly into the drive backplane. Two of these controllers are required to control 24 front drives and 2 or 4 rear drives.
- Supports up to 16 internal SAS HDDs and SAS/SATA SSDs
- Supports JBOD or pass-through mode
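The two-controller requirement follows from simple ceiling division over the 16-drive-per-HBA limit, as this sketch shows (the function name is illustrative):

```python
import math

def hbas_required(total_drives: int, drives_per_hba: int = 16) -> int:
    """Each internal SAS HBA controls up to 16 SAS/SATA drives, so the
    number of controllers needed is a ceiling division."""
    return math.ceil(total_drives / drives_per_hba)

# 24 front + 4 rear drives -> 2 HBAs, matching the stated requirement.
print(hbas_required(24 + 4))  # 2
```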
Cisco Boot-Optimized M.2 RAID Controller | HX-M2-HWRAID
Order two identical M.2 SATA SSDs for the boot-optimized RAID controller. You cannot mix M.2 SATA SSD capacities. It is recommended that M.2 SATA SSDs be used as boot-only devices.
- CIMC/UCSM is supported for configuring volumes and monitoring the controller and installed SATA M.2 drives.
- The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported.
- Hot-plug replacement is not supported. The server must be powered off.
- The boot-optimized RAID controller is supported when the server is used as a compute node in HyperFlex configurations.
The Cisco HyperFlex Edge Network Topologies define the architecture for connecting the nodes within the HyperFlex HX240c M6 Edge cluster to the network, emphasizing scalability, flexibility, and reliability for edge deployments. These topologies support configurations using existing top-of-rack switches, with options for single-switch or dual-switch setups. Single-switch topologies are cost-effective and ideal for smaller environments, while dual-switch configurations enhance redundancy and fault tolerance, ensuring continuous operations in the event of a network failure. The flexibility to use 1GE, 10GE, or 25GE switching enables organizations to optimize bandwidth and performance based on their needs. These topologies are integral to the cluster's ability to provide seamless, high-performance hyperconverged infrastructure in edge environments, such as remote offices or branch locations, where infrastructure simplicity and reliability are paramount.
TOPO 2 | HX-E-TOPO2
Selecting HX-E-TOPO2 will include the Intel i350 quad-port PCIe NIC for 1GE topologies. Two ports on the NIC are used for HyperFlex functions. The remaining two ports may be used by applications after the HyperFlex deployment is completed.
- Included Card: Intel i350 quad port
- 1GbE Dual Switch Topology
- 1GbE Single Switch Topology
TOPO 3
TOPO3, also referred to as the 1 Gigabit Ethernet (1GE) Single Switch Topology, is a configuration designed for smaller-scale Cisco HyperFlex Edge deployments where simplicity and cost efficiency are priorities.
- Included Card: None
- 1 Gigabit Ethernet Single Switch Topology
TOPO 4 | HX-E-TOPO4
Cisco strongly recommends HX-E-TOPO4 for all new deployments.
Selecting HX-E-TOPO4 will include the Cisco UCS 1467 quad-port 10/25G SFP28 mLOM card (HX-M-V25-04) for 10/25GE topologies. Two ports on the card are used for HyperFlex functions. The remaining two ports may be used by applications after the HyperFlex deployment is completed.
- Included Card: mLOM Cisco UCS 1467 4P 10/25G SFP28
- 10/25GbE Dual Switch Topology
- 10/25GbE Single Switch Topology

TOPO 5 | HX-E-TOPO5
HyperFlex NIC Connectivity Mode
Starting with HyperFlex 5.0(2a), the TOPO5 option is supported. A minimum of 4 NIC ports is required. If NIC connectivity mode is selected, the Riser 1 HH x16 slot and Riser 2 HH x8 slot options cannot be selected.
- Approved Cards: see table below.
- 4P NIC - 10/25GbE Dual Switch Topology
- 2P NIC - 10/25GbE Dual Switch Topology
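The minimum-port rule can be expressed as a simple sum over the installed approved NICs (a sketch; the function name, and the assumption that ports from multiple approved NICs count toward the total, are mine):

```python
def topo5_port_check(nic_port_counts: list) -> bool:
    """TOPO5 (HyperFlex NIC connectivity mode) requires a minimum of
    4 NIC ports in total; nic_port_counts lists the port count of each
    installed approved NIC (e.g., 4 for a quad-port, 2 for a dual-port)."""
    return sum(nic_port_counts) >= 4
```

For example, a single quad-port NIC (`[4]`) or two dual-port NICs (`[2, 2]`) satisfy the minimum, while a single dual-port NIC (`[2]`) does not.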
TOPO 5 Supported Cards
| Product ID (PID) | Description |
|---|---|
| HyperFlex NIC Connectivity Mode | |
| R2 Slot 4 x8 PCIe NIC | |
| HX-PCIE-ID10GF | Intel X710 dual-port 10G SFP+ |
| HX-PCIE-IQ10GF | Intel X710 quad-port 10G SFP+ |
| HX-P-I8D25GF | Cisco-Intel E810XXVDA2 2x25/10 GbE SFP28 PCIe NIC |
| HX-P-I8Q25GF | Cisco-Intel E810XXVDA4L 4x25/10 GbE SFP28 PCIe NIC |
| R2 Slot 6 x8 PCIe NIC | |
| HX-PCIE-ID10GF | Intel X710 dual-port 10G SFP+ |
| HX-PCIE-IQ10GF | Intel X710 quad-port 10G SFP+ NIC |
| HX-P-I8D25GF | Cisco-Intel E810XXVDA2 2x25/10 GbE SFP28 PCIe NIC |
| HX-P-I8Q25GF | Cisco-Intel E810XXVDA4L 4x25/10 GbE SFP28 PCIe NIC |
I/O Centric Configuration
- Riser 1A (I/O centric, CPU1)
- Riser 2A (I/O centric, CPU2)
- Riser 3A (I/O centric, CPU2)
The I/O centric version shows all PCIe slots
Storage Centric Configuration
- Riser 1B (storage-centric)
- Riser 2A (I/O centric, CPU2)
- Riser 3B (storage-centric, CPU2)
The storage centric version shows a combination of PCIe risers and storage bays.
GPU Centric Configuration
- Riser 1A (I/O centric)
- Riser 2A (I/O centric, CPU2)
- Riser 3C (GPU Centric, CPU2)
GPUs are supported in x16 slots 2, 5, and 7.
Two CPUs must be installed to use Risers 2 and 3.
Riser 1A: Default Riser | HX-RIS1A-240M6
- Slot 1 x8 | FH-3/4 length, NCSI
- Slot 2 x16 | FH-FL GPU Card, NCSI
- Slot 3 x8 | FH-FL, x8
Controlled with CPU1
Riser 1B: Storage Riser | HX-RIS1B-240M6
- Slot 1 | Reserved
- Slot 2 x4 | 2.5” drive bay 102
- Slot 3 x4 | 2.5” drive bay 101
Controlled with CPU1
Riser 2A: Default Riser | UCSC-RIS2A-240M6
- Slot 4 x8 | FH-3/4 length, NCSI
- Slot 5 x16 | FH-FL GPU Card, NCSI
- Slot 6 x8 | FH-FL, x8
Controlled with CPU2
Riser 3A: Default Riser | HX-RIS3A-240M6
- Slot 7 x8 | FH-FL GPU Card
- Slot 8 x8 | FH-FL GPU Card
Controlled with CPU2
Riser 3B: Storage Riser | HX-RIS3B-240M6
- Slot 7 x4 | 2.5” drive bay 104
- Slot 8 x4 | 2.5” drive bay 103
Controlled with CPU2
Riser 3C: GPU Riser | HX-RIS3C-240M6
- Slot 7 x16 | FH-FL, DW GPU
- Slot 8 blocked by double-wide GPU
Controlled with CPU2.
GPUs cannot be mixed.
Riser 1B does not accept GPUs.
Riser 3B does not accept GPUs.
When a GPU is ordered, the server ships with low-profile heatsinks (PID HX-HSLP-M6=); for double-wide GPUs, the special air duct (PID HX-ADGPU-245M6=) must also be selected.
| GPU Product ID (PID) | PID Description | Card Size | Max GPU per Node | Riser 1A (Gen 4) | Riser 1B (Gen 4) | Riser 2 (Gen 4) | Riser 3A (Gen 4) | Riser 3B (Gen 4) | Riser 3C |
|---|---|---|---|---|---|---|---|---|---|
| HX-GPU-A10 | TESLA A10, PASSIVE, 150W, 24GB | Single-wide | 5 | slot 2 & 3 | N/A | slot 5 & 6 | N/A | N/A | slot 7 |
| HX-GPU-A30 | TESLA A30, PASSIVE, 180W, 24GB | Double-wide | 3 | slot 2 | N/A | slot 5 | N/A | N/A | slot 7 |
| HX-GPU-A40 | TESLA A40 RTX, PASSIVE, 300W, 48GB | Double-wide | 3 | slot 2 | N/A | slot 5 | N/A | N/A | slot 7 |
| HX-GPU-A100-80 | TESLA A100, PASSIVE, 300W, 80GB | Double-wide | 3 | slot 2 | N/A | slot 5 | N/A | N/A | slot 7 |
| HX-GPU-A16 | NVIDIA A16 PCIE 250W 4X16GB | Double-wide | 3 | slot 2 | N/A | slot 5 | N/A | N/A | slot 7 |
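The placement rules in the table above can be expressed as a lookup, which makes the riser/slot constraints easy to check programmatically (an illustrative structure; the dictionary and helper names are mine, the PIDs and slot numbers come from the table):

```python
# GPU placement rules from the HX240c M6 GPU table: which risers and
# PCIe slots each GPU PID may occupy, and the per-node maximum.
GPU_PLACEMENT = {
    "HX-GPU-A10":     {"size": "single-wide", "max_per_node": 5,
                       "slots": {"1A": [2, 3], "2": [5, 6], "3C": [7]}},
    "HX-GPU-A30":     {"size": "double-wide", "max_per_node": 3,
                       "slots": {"1A": [2], "2": [5], "3C": [7]}},
    "HX-GPU-A40":     {"size": "double-wide", "max_per_node": 3,
                       "slots": {"1A": [2], "2": [5], "3C": [7]}},
    "HX-GPU-A100-80": {"size": "double-wide", "max_per_node": 3,
                       "slots": {"1A": [2], "2": [5], "3C": [7]}},
    "HX-GPU-A16":     {"size": "double-wide", "max_per_node": 3,
                       "slots": {"1A": [2], "2": [5], "3C": [7]}},
}

def valid_gpu_slots(pid: str) -> list:
    """Flatten the riser/slot map into the sorted list of usable slots."""
    entry = GPU_PLACEMENT[pid]
    return sorted(s for slots in entry["slots"].values() for s in slots)

print(valid_gpu_slots("HX-GPU-A10"))  # [2, 3, 5, 6, 7]
```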
Power supplies share a common electrical and physical design that allows for hot-plug and tool-less installation into M6 HX-Series servers. Each power supply is certified for high-efficiency operation and offers multiple power output options.
The 2300 W power supply uses a different power connector than the rest of the power supplies, so you must use different power cables to connect it.
- Height: 3.42 in. (8.7 cm)
- Width: 18.9 in.(48.0 cm)
- Length: 30 in. (76.2 cm)
- Weight:
- Min: 35.7 lbs (16.2 kg)
- Max: 61.7 lbs (28 kg)
| Product ID (PID) | PID Description |
|---|---|
| PSU (Input High Line 210VAC) | |
| HX-PSU1-1050W | 1050W AC PSU Platinum (Not EU/UK Lot 9 Compliant) |
| HX-PSUV2-1050DC | 1050W -48V DC Power Supply for Rack Server |
| HX-PSU1-1600W | 1600W AC PSU Platinum (Not EU/UK Lot 9 Compliant) |
| HX-PSU1-2300W | 2300W AC Power Supply for Rack Servers Titanium |
| PSU (Input Low Line 110VAC) | |
| HX-PSU1-1050W | 1050W AC PSU Platinum (Not EU/UK Lot 9 Compliant) |
| HX-PSUV2-1050DC | 1050W -48V DC Power Supply for Rack Server |
| HX-PSU1-2300W | 2300W AC Power Supply for Rack Servers Titanium |
| HX-PSU1-1050ELV | 1050W AC PSU Enhanced Low Line (Not EU/UK Lot 9 Compliant) |