Current Configuration

(The list below serves as quick links to each section)

Chassis Options

Processors

Memory

Topology Mode

Topo5 Mode Supported Cards

Storage Controller/Boot

Hybrid Drives Front/Rear Capacity

Hybrid Drives Front Cache

Hybrid Drives Front System

ALL-FLASH Drives Front/Rear Capacity

ALL-FLASH Drives Front Cache

ALL-FLASH Drives Front System

Risers

GPU Options

Rail Kit

Security / TPM

ECS Warranty

Power Supply Units (PSU)


Cisco HyperFlex Edge HX240c M6 Rack Server CTO

Cisco


UCSH-HX-EDGE-M6-MLB-CTO

 

READ ME

  • Some options may not be displayed until the compatible parent option is chosen, e.g. Chassis – Drives, Processor – RAM, etc.
  • “Quote” items can be added to your cart and sent to us via the cart page.
  • Click the blue bar to close/open a section from view after choosing your options.

Chassis Options

The Hybrid chassis holds SAS/SATA HDDs and SSDs.
The All Flash chassis supports SSDs only.

Processors

3rd Gen Intel Xeon Scalable

Supports up to (2) Processors

Memory

Max 16 DIMMs per Socket

Topology Mode

A Topology must be selected; it controls the networking configuration for your cluster.
Please read below to make sure you have chosen the correct Topology for your environment.

Topo5 Mode Supported Cards (Optional)

If TOPO5 is selected, you must provide 4 ports: either (2) dual-port cards or (1) quad-port card.

Storage Controller/Boot (Optional)

There are dedicated slots for both of these cards.
The SAS HBA supports up to 16 drives; you must install (2) cards to support all the front/rear drive bays.
The M.2 Boot Controller supports (2) M.2 drives for boot.

Hybrid Drives Front/Rear Capacity (Optional)

These drives are supported only in the Hybrid Chassis for the Capacity Slot drives.
Please read the manual below for more details into each drive section.

Hybrid Drives Front Cache (Optional)

These drives are supported only in the Hybrid Chassis for the Cache Drive Slot.
Please read the manual below for more details about each drive section.

Hybrid Drives Front System (Optional)

These drives are supported only in the Hybrid Chassis for the System Drive Slot.
Please read the manual below for more details about each drive section.

ALL-FLASH Drives Front/Rear Capacity (Optional)

These drives are supported only in the ALL-FLASH Chassis for the Capacity Slot drives.
Please read the manual below for more details about each drive section.

ALL-FLASH Drives Front Cache (Optional)

These drives are supported only in the ALL-FLASH Chassis for the Cache Drive Slot.
Please read the manual below for more details about each drive section.

ALL-FLASH Drives Front System (Optional)

These drives are supported only in the ALL-FLASH Chassis for the System Drive Slot.
Please read the manual below for more details about each drive section.

Risers

The system has three Riser Options.
I/O Centric provides the most PCIe slots.
Storage Centric provides up to (4) rear drive slots.
GPU Centric provides access to GPU-supported slots.

GPU Options

If the GPU Centric riser option was selected, the supported GPUs are listed below.
When a GPU is ordered, the server comes with low-profile heatsinks PID (HX-HSLP-M6=); for double-wide GPUs, you must also select the special air duct PID (HX-ADGPU-245M6=).
256 GB DIMMs cannot be combined with GPU cards.
Do not mix GPUs.
For more details please read the description or the manual below.

Rail Kit (Optional)

Security / TPM (Optional)

ECS Warranty

Power Supply Units (PSU)

Helpful Tip: Once the desired configuration is selected, click "Add to Cart".
From the cart page you can submit a quote request for best pricing.

What's New

  • (2) 3rd Gen Intel Xeon Scalable Processors
  • (32) DDR4 3200 MT/s DIMMs
  • (24) SAS/SATA HDD/SSD
  • (4) Rear Drive Bays
  • NVMe not Supported
  • (8) PCIe 3.0 slots
  • (6) Fans
  • (2) Power Supply Units
  • (1) Dedicated mLOM slot
  • (2) Dedicated Internal SAS HBA Slots
  • (2) Dedicated M.2 Slots for Boot
  • (2) GPUs Supported

HyperFlex Edge clusters can be configured in 2, 3 or 4 node configurations. Single node clusters and clusters larger than 4 nodes are not supported with HyperFlex Edge.
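The 2-to-4-node rule above can be expressed as a simple pre-order check. This is an illustrative sketch only; the function name is hypothetical, not part of any Cisco tooling:

```python
def validate_edge_cluster_size(node_count: int) -> None:
    """HyperFlex Edge supports only 2-, 3-, or 4-node clusters.

    Raises ValueError for single-node clusters or clusters larger
    than 4 nodes, which HyperFlex Edge does not support.
    """
    if not 2 <= node_count <= 4:
        raise ValueError(
            f"{node_count} node(s) is not supported: HyperFlex Edge "
            "clusters must have 2, 3, or 4 nodes."
        )

validate_edge_cluster_size(3)  # OK: 3-node Edge cluster is valid
```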


The Cisco HyperFlex HX240c Edge Server is purpose-built for remote offices, branch locations, and edge environments, delivering high-capacity storage and compute capabilities in a compact 2U form factor. Designed to simplify operations in distributed locations, the HX240c Edge Node supports hybrid storage configurations with up to 24 front-facing SAS/SATA drives and optional rear-facing drives, enabling flexibility for varying workloads.

It operates seamlessly without requiring Cisco UCS Fabric Interconnects, relying on top-of-rack Ethernet switches for streamlined deployment and management. Managed through Cisco Intersight, this node provides centralized, cloud-based oversight, making it an ideal choice for businesses seeking reliable and efficient edge computing solutions.


Ideal for:

  • Artificial Intelligence (AI)
  • Big Data Analytics
  • Cloud Servers
  • Data Analytics
  • Database
  • Deep Learning (DL)
  • Edge Servers
  • High-Performance Computing (HPC)
  • Hyper-Converged infrastructure (HCI)
  • Machine Learning (ML)
  • Network Function Virtualization (NFV)
  • Video Analytics
  • Video Streaming
  • Virtualization
  • Virtual Machines (VM)
Chassis Options
Cisco UCS Hybrid Edge HX240c M6 Rack Server | 24x SAS/SATA SFF
HX-E-240C-M6SX

Up to 24 front SFF hard disk drives (HDDs) and solid-state drives (SSDs). The 24 drives are used as below:

  • Drive bays 1 – 4 support SAS/SATA HDD or SSD
  • Drive bays 5 - 24 support SAS/SATA HDD ONLY
  • (3 to 23) SAS/SATA HDD/SSD (for capacity)
  • One SAS/SATA SSD (for caching)
  • One SAS/SATA SSD (system drive for HXDP operations)
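The hybrid front-bay allocation above (3-23 capacity drives, one cache SSD, one system SSD, 24 bays total) can be sketched as a validation helper. The function name and return convention are illustrative assumptions, not part of any Cisco tool:

```python
def check_hybrid_front_drives(capacity: int, cache: int = 1, system: int = 1) -> bool:
    """Check a hybrid front-bay layout against the rules listed above."""
    if cache != 1 or system != 1:
        return False              # exactly one cache SSD and one system SSD
    if not 3 <= capacity <= 23:
        return False              # 3 to 23 capacity drives
    return capacity + cache + system <= 24   # 24 front bays total

print(check_hybrid_front_drives(capacity=22))  # True: 22 + 1 + 1 = 24 drives fit
print(check_hybrid_front_drives(capacity=2))   # False: below the 3-drive minimum
```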
Storage Centric Configuration

Up to 4 SFF SAS/SATA rear drives (Optional)

I/O Centric Configuration

Up to 8 PCIe 3.0 Slots

Supports a boot-optimized RAID controller carrier that holds two SATA M.2 SSDs.

Cisco UCS All Flash Edge HX240c M6 Rack Server | 24x SSD SFF
HXAF-E-240C-M6SX

Up to 24 front SFF solid-state drives (SSDs). The 24 drives are used as below:

  • Drive bays 1 – 24 support SAS/SATA SSDs
  • (3 to 23) SAS/SATA SSD (for capacity)
  • One SAS/SATA SSD (for caching)
  • One SAS/SATA SSD (system drive for HXDP operations)

Storage Centric Configuration

Up to 4 SFF SAS/SATA rear drives (Optional)

I/O Centric Configuration

Up to 8 PCIe 3.0 Slots

All Flash means only SSD drives are supported.
NVMe drives are not supported.

Drive Controller / Boot Options
Cisco 12G SAS HBA | HX-SAS-240M6

This HBA supports up to 16 SAS or SATA drives operating at 3 Gb/s, 6 Gb/s, or 12 Gb/s (the HX-E-240C-M6SX and HXAF-E-240C-M6SX servers have 24 front drives and 2 or 4 rear drives). It supports JBOD or pass-through mode (not RAID) and plugs directly into the drive backplane. Two of these controllers are required to control 24 front drives plus 2 or 4 rear drives.

  • Supports up to 16 internal SAS HDDs and SAS/SATA SSDs
  • Supports JBOD or pass-through mode
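Since each HBA handles at most 16 drives, the number of controllers needed follows directly from the drive count. A minimal sketch (the function name is illustrative):

```python
import math

def hbas_required(front_drives: int, rear_drives: int = 0) -> int:
    """Each HX-SAS-240M6 HBA controls up to 16 drives (JBOD/pass-through),
    so the controller count is the total drive count divided by 16, rounded up."""
    total = front_drives + rear_drives
    return math.ceil(total / 16)

print(hbas_required(24, 4))  # 2 -- two HBAs cover 24 front + 4 rear drives
print(hbas_required(16))     # 1 -- a single HBA suffices for 16 drives
```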

Cisco Boot optimized M.2 Raid controller | HX-M2-HWRAID

Order two identical M.2 SATA SSDs for the boot-optimized RAID controller. You cannot mix M.2 SATA SSD capacities. It is recommended that the M.2 SATA SSDs be used as boot-only devices.

  • CIMC/UCSM is supported for configuring volumes and monitoring the controller and installed SATA M.2 drives.
  • The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported.
  • Hot-plug replacement is not supported. The server must be powered off.
  • The boot-optimized RAID controller is supported when the server is used as a compute node in HyperFlex configurations.
Topologies / Networking

The Cisco HyperFlex Edge Network Topologies define the architecture for connecting the nodes within the HyperFlex HX240c M6 Edge cluster to the network, emphasizing scalability, flexibility, and reliability for edge deployments. These topologies support configurations using existing top-of-rack switches, with options for single-switch or dual-switch setups. Single-switch topologies are cost-effective and ideal for smaller environments, while dual-switch configurations enhance redundancy and fault tolerance, ensuring continuous operations in the event of a network failure. The flexibility to use 1GE, 10GE, or 25GE switching enables organizations to optimize bandwidth and performance based on their needs. These topologies are integral to the cluster's ability to provide seamless, high-performance hyperconverged infrastructure in edge environments, such as remote offices or branch locations, where infrastructure simplicity and reliability are paramount.



Cisco HyperFlex Edge Hybrid HX240c M6 TOPO 5 Hyperflex NIC Connectivity Mode | HX-E-TOPO5

TOPO 5 | HX-E-TOPO5
Hyperflex NIC Connectivity Mode

Starting with HyperFlex 5.0(2a), the TOPO5 option is supported. A minimum of 4 NIC ports is required. If NIC connectivity mode is selected, you cannot select the Riser 1 HH x16 slot or Riser 2 HH x8 slot options.

  • Approved Cards: View table below.
  • 4P NIC - 10/25GbE Dual Switch Topology
  • 2P NIC - 10/25GbE Dual Switch Topology
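The 4-port minimum can be checked by summing ports across the selected cards; the port counts below come from the approved-cards table in this section. The function and dictionary names are illustrative, not part of any Cisco tooling:

```python
# Port counts for the approved TOPO5 NICs (from the table in this section).
NIC_PORTS = {
    "HX-PCIE-ID10GF": 2,  # Intel X710 dual-port 10G SFP+
    "HX-PCIE-IQ10GF": 4,  # Intel X710 quad-port 10G SFP+
    "HX-P-I8D25GF": 2,    # Cisco-Intel E810XXVDA2 2x25/10 GbE SFP28
    "HX-P-I8Q25GF": 4,    # Cisco-Intel E810XXVDA4L 4x25/10 GbE SFP28
}

def topo5_ports_ok(selected_nics: list[str]) -> bool:
    """TOPO5 needs at least 4 NIC ports: 2 dual-port cards or 1 quad-port card."""
    return sum(NIC_PORTS[pid] for pid in selected_nics) >= 4

print(topo5_ports_ok(["HX-PCIE-ID10GF", "HX-PCIE-ID10GF"]))  # True (2 + 2 ports)
print(topo5_ports_ok(["HX-P-I8D25GF"]))                      # False (only 2 ports)
```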



HyperFlex NIC Connectivity Mode – Approved Cards

Riser 2, Slot 4 (x8 PCIe NIC)
Product ID (PID)   Description
HX-PCIE-ID10GF     Intel X710 dual-port 10G SFP+
HX-PCIE-IQ10GF     Intel X710 quad-port 10G SFP+
HX-P-I8D25GF       Cisco-Intel E810XXVDA2 2x25/10 GbE SFP28 PCIe NIC
HX-P-I8Q25GF       Cisco-Intel E810XXVDA4L 4x25/10 GbE SFP28 PCIe NIC

Riser 2, Slot 6 (x8 PCIe NIC)
Product ID (PID)   Description
HX-PCIE-ID10GF     Intel X710 dual-port 10G SFP+
HX-PCIE-IQ10GF     Intel X710 quad-port 10G SFP+
HX-P-I8D25GF       Cisco-Intel E810XXVDA2 2x25/10 GbE SFP28 PCIe NIC
HX-P-I8Q25GF       Cisco-Intel E810XXVDA4L 4x25/10 GbE SFP28 PCIe NIC
PCIe Riser Options

You must install (2) CPUs to use Risers 2 & 3.

GPU Options

GPUs cannot be mixed.

Riser 1B does not accept GPUs.

Riser 3B does not accept GPUs.

When a GPU is ordered, the server comes with low-profile heatsinks PID (HX-HSLP-M6=); for double-wide GPUs, you must also select the special air duct PID (HX-ADGPU-245M6=).


GPU Product ID (PID)   Description                          Card Size     Max GPUs per Node   Riser 1A (Gen 4)   Riser 2 (Gen 4)   Riser 3C
HX-GPU-A10             TESLA A10, PASSIVE, 150W, 24GB       Single-wide   5                   slots 2 & 3        slots 5 & 6       slot 7
HX-GPU-A30             TESLA A30, PASSIVE, 180W, 24GB       Double-wide   3                   slot 2             slot 5            slot 7
HX-GPU-A40             TESLA A40 RTX, PASSIVE, 300W, 48GB   Double-wide   3                   slot 2             slot 5            slot 7
HX-GPU-A100-80         TESLA A100, PASSIVE, 300W, 80GB      Double-wide   3                   slot 2             slot 5            slot 7
HX-GPU-A16             NVIDIA A16 PCIE 250W 4X16GB          Double-wide   3                   slot 2             slot 5            slot 7

Risers 1B (Gen 4), 3A (Gen 4), and 3B (Gen 4) do not support any of these GPUs.
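The no-mixing rule and the per-node maximums above can be combined into one selection check. This is an illustrative sketch; the function and dictionary names are hypothetical:

```python
# Max GPUs per node for each supported PID (from the table above).
GPU_MAX = {
    "HX-GPU-A10": 5,       # single-wide
    "HX-GPU-A30": 3,       # double-wide
    "HX-GPU-A40": 3,       # double-wide
    "HX-GPU-A100-80": 3,   # double-wide
    "HX-GPU-A16": 3,       # double-wide
}

def gpu_selection_ok(gpus: list[str]) -> bool:
    """GPUs cannot be mixed, and the count must not exceed the per-node max."""
    if not gpus:
        return True
    pids = set(gpus)
    if len(pids) > 1:
        return False                       # mixing GPU models is not allowed
    return len(gpus) <= GPU_MAX.get(pids.pop(), 0)

print(gpu_selection_ok(["HX-GPU-A10"] * 5))            # True: 5 x A10 allowed
print(gpu_selection_ok(["HX-GPU-A10", "HX-GPU-A30"]))  # False: mixed models
```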
Power & Dimensions

Power supplies share a common electrical and physical design that allows for hot-plug, tool-less installation into M6 HX-Series servers. Each power supply is certified for high-efficiency operation and offers multiple power output options.


The 2300 W power supply uses a different power connector than the rest of the power supplies, so you must use different power cables to connect it.

Dimensions
  • Height: 3.42 in. (8.7 cm)
  • Width: 18.9 in. (48.0 cm)
  • Length: 30 in. (76.2 cm)
  • Weight:
    • Min: 35.7 lbs (16.2 kg)
    • Max: 61.7 lbs (28 kg)
Product ID (PID)   Description
PSU (Input High Line 210VAC)
HX-PSU1-1050W 1050W AC PSU Platinum (Not EU/UK Lot 9 Compliant)
HX-PSUV2-1050DC 1050W -48V DC Power Supply for Rack Server
HX-PSU1-1600W 1600W AC PSU Platinum (Not EU/UK Lot 9 Compliant)
HX-PSU1-2300W1 2300W AC Power Supply for Rack Servers Titanium
PSU (Input Low Line 110VAC)
HX-PSU1-1050W 1050W AC PSU Platinum (Not EU/UK Lot 9 Compliant)
HX-PSUV2-1050DC 1050W -48V DC Power Supply for Rack Server
HX-PSU1-2300W 2300W AC Power Supply for Rack Servers Titanium
HX-PSU1-1050ELV 1050W AC PSU Enhanced Low Line (Not EU/UK Lot 9 Compliant)