Current Configuration
(List below serves as quick links to each section)
Server Chassis
Select Processor
Memory
RAID Controllers
Boot Option | Boot Optimized Storage Subsystem (BOSS)
Boot Option | Internal Dual SD Module (IDSDM)
Mezzanine Card for Fabric A
Mezzanine Card for Fabric B
Mini-Mezzanine Card for Fabric C
Select Drive | SATA
Select Drive | SAS
Select Drive | NVMe SSD PCIe Gen 3
iDRAC Remote Options
Trusted Platform Module (TPM)
ECS Warranty
Quick Specs
Form Factor | Double-width blade server
Processor | Max: 28 cores; 2 or 4 x 2nd Generation Intel Xeon Scalable
Memory | Max: 3TB (RDIMM) / 6TB (LRDIMM); 24 or 48 DDR4 RDIMM/LRDIMM 2933MHz
Memory | Max: 15.36TB; 24 x Intel® Optane™ DC persistent memory (PMem)
Memory | Max: 192GB; 12 x NVDIMM-N 2933MHz
Drive Bays | 8 x SFF SAS/SATA (HDD/SSD) or NVMe SSD
Storage Controllers | S140 (SW RAID); PERC HBA330 MX, H730P MX, H745P MX; HBA330 MX Mini-Mezzanine
Boot Options | BOSS-S1, IDSDM, USB
I/O Slots | 4 x PCIe 3.0 x16 mezzanine slots (Fabric A and B); 2 x PCIe 3.0 x16 mini-mezzanine slots (Fabric C)
Management | iDRAC9 with Lifecycle Controller; vFlash
- Boot options include BOSS-S1 or IDSDM
- Four PCIe mezzanine card slots for connecting to network Fabric A and B
- Two PCIe mini-mezzanine card slots for connecting to storage Fabric C
- iDRAC9 with Lifecycle Controller
- Dell PowerEdge MX7000 Modular CTO Enclosure Chassis
- Dell PowerEdge MX740c CTO Compute Sled Up to 8
- Dell PowerEdge MX750c CTO Compute Sled Up to 8
- Dell PowerEdge MX760c CTO Compute Sled Up to 8
- Dell PowerEdge MX840c CTO Compute Sled Up to 4
- Dell PowerEdge MX5016s CTO Storage Sled Up to 7 (requires a minimum of 1 compute node in a chassis with storage sleds)

Each node is a separate CTO that you must configure.
The PowerEdge MX840c is a powerful four-socket, full-height, double-width server that features dense compute, exceptionally large memory capacity, and a highly expandable storage subsystem. It is the ultimate scale-up server that excels at running a wide range of database applications, substantial virtualization, and software-defined storage environments. The MX7000 chassis supports up to four MX840c compute sleds.
Dell PowerEdge MX840c CTO Compute Sled
Powerful and flexible, four-socket, double-width compute sled powered by up to four 28-core 2nd Generation Intel Xeon Scalable processors. The PowerEdge MX840c features a large memory footprint with up to 48 DDR4 DIMM slots. It supports up to eight 2.5-inch SAS/SATA (HDD/SSD) or Express Flash NVMe PCIe SSD drives. The PowerEdge MX7000 chassis supports up to four MX840c servers.
- Exceptionally broad memory configuration capacities of up to 3TB (RDIMM) or 6.1TB (LRDIMM); supports up to 192GB of NVDIMM for large in-memory applications
- Supports up to 24 slots for Intel Optane DC persistent memory (DCPMM, 12.2TB) with a maximum total capacity of 15.36TB per server
- Choice of optional M.2 Boot Optimized Storage Solution (BOSS) or Internal Dual microSD Module (IDSDM), to streamline operating system and data storage
- Option of Express Flash NVMe PCIe SSD for high-performing direct-attached storage requirements
Simplify management and intelligently automate
All PowerEdge servers feature the embedded, agent-free integrated Dell Remote Access Controller (iDRAC) 9 with Lifecycle Controller (LC) to simplify server systems management. Dell EMC OpenManage Enterprise – Modular Edition (OME-Modular) provides comprehensive in-system management within the PowerEdge MX7000 server chassis. Delivering the key abilities of OpenManage Enterprise systems management, OME-Modular's unified web/RESTful API interface manages the entire PowerEdge MX environment, including compute, storage, and networking (see the Redfish sketch after the bullets below).
- Deploy, monitor, manage, configure, update, troubleshoot and remediate Dell EMC servers from any location, regardless of operating system or hypervisor presence or state.
- Speed systems management performance with the enhanced iDRAC9 processor, delivering up to four times better performance than the prior generation.
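To illustrate the agent-free management described above, here is a minimal Python sketch that pulls basic health information from a sled's iDRAC9 over its Redfish REST interface. The iDRAC address, the "root"/"calvin" credentials, and the System.Embedded.1 resource path are placeholders you would adapt to your environment; this is a sketch of the kind of call the RESTful interface supports, not a Dell-documented workflow.

import requests

IDRAC = "https://192.0.2.10"                     # hypothetical iDRAC address for illustration
SYSTEM = f"{IDRAC}/redfish/v1/Systems/System.Embedded.1"

# Basic-auth GET of the Redfish system resource; verify=False is a lab-only convenience.
resp = requests.get(SYSTEM, auth=("root", "calvin"), verify=False)
resp.raise_for_status()
system = resp.json()

print("Model: ", system.get("Model"))
print("Power: ", system.get("PowerState"))
print("Health:", system.get("Status", {}).get("Health"))
print("Memory:", system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"), "GiB")

The same properties are available for every sled in the chassis, so the same call can be looped across all compute nodes from one management station.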
Set the foundation for the modern data center
Designed for Dell EMC’s PowerEdge MX kinetic infrastructure ecosystem, the PowerEdge MX840c server, with dense compute, exceptionally large memory capacity and highly expandable storage subsystem, delivers the flexibility and agility needed in today’s demanding, shared-resource environments.
- Double-width; up to four sleds per PowerEdge MX7000 chassis.
- Two or four second-generation Intel Xeon Scalable processors per sled (up to 28 cores each); up to 448 cores per chassis.
- Up to 48 DDR4 DIMM slots, 6TB max memory, and speeds up to 2933MT/s; up to 192 DIMMs per chassis for up to 24.5TB max memory (see the arithmetic sketch after this list).
- Supports up to 24 DIMM slots for Intel Optane DC persistent memory (DCPMM, 12.2TB) with a maximum total capacity of 15.36TB per server.
- Up to 8 x 2.5" drive bays with SAS/SATA (HDD/SSD) and NVMe PCIe SSD support, plus optional M.2 boot.
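The chassis-level maximums above follow directly from the per-sled figures. The quick arithmetic below is illustrative only; the 128GB LRDIMM size is an assumption used to show how the rounded 6TB-per-sled and roughly 24.5TB-per-chassis numbers are reached.

# Illustrative arithmetic only: the 128GB LRDIMM size is an assumption.
SLEDS_PER_CHASSIS = 4        # MX840c sleds per MX7000 chassis
CPUS_PER_SLED = 4
CORES_PER_CPU = 28           # top-bin 2nd Gen Intel Xeon Scalable
DIMM_SLOTS_PER_SLED = 48
LRDIMM_GB = 128              # assumed largest LRDIMM

cores_per_chassis = SLEDS_PER_CHASSIS * CPUS_PER_SLED * CORES_PER_CPU   # 448
dimms_per_chassis = SLEDS_PER_CHASSIS * DIMM_SLOTS_PER_SLED             # 192
memory_per_sled_gb = DIMM_SLOTS_PER_SLED * LRDIMM_GB                    # 6144 GB (~6TB)
memory_per_chassis_gb = memory_per_sled_gb * SLEDS_PER_CHASSIS          # 24576 GB (quoted as ~24.5TB)

print(cores_per_chassis, dimms_per_chassis, memory_per_sled_gb, memory_per_chassis_gb)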
Dynamically configure for optimal workload performance and efficiency
The PowerEdge MX architecture disaggregates and granularly reassigns resources, maximizing utilization while minimizing overprovisioning and stranded infrastructure. The PowerEdge MX840c's fully configurable, no-compromise compute maximizes scalability to enhance workload performance and optimization.
- Scale compute resources with second-generation Intel Xeon Scalable processors and tailor performance based on your unique workload requirements.
- Flexible memory configurations with capacities from 8GB to 6TB; RDIMMs and LRDIMMs for reduced memory loading and greater density.
- Large, high-performance onboard storage footprint with double the HDD drives per compute sled.
How do you get 4 processors in a single blade? By making it a double-wide blade with a stacked system board. This is accomplished by installing the Processor Expansion Module (PEM).
The MX840c double-wide blade has 4 processors, made possible by the PEM board. The PEM board holds not only the 3rd and 4th processors but also the 24 extra DIMM slots, 2 extra mezzanine slots, and 1 extra Fab C slot.
When the Processor Expansion Module (PEM) is installed, you have full use of the system to accommodate any drive and DIMM configuration you desire.
The system can support 1 Mini-Mezz and 1 Internal PERC, or it can handle 1 Jumbo PERC, which does the work of both the Mini-Mezz and the Internal PERC. The Fabric C slot is how you connect the MX5016s Storage Array to this compute sled.
The Mini-Mezzanine is installed in the rear of the system near the Fab A and B mezzanine cards. The Mini-Mezzanine connects to Fab C in the MX7000 enclosure. Only certain cards are supported here.
The Internal PERC module is the MX version of the PERC cards. It controls the internal drives on this compute sled. There is a dedicated slot for this PERC near the front of the system. It is not supported if a Jumbo PERC is installed; they use the same screw to connect to the chassis.
The Jumbo PERC is unique: it performs the role of both the Mini-Mezzanine card and the Internal PERC. Not only does it control the internal drives, it also controls the external storage arrays connected to Fab C in the back of the MX7000 enclosure. This is one way to connect to the MX5016s Storage Array. If a Jumbo PERC is installed, the system does not support the Mini-Mezzanine or the Internal PERC.
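If you need to confirm which controllers a configured sled actually reports (an Internal PERC, a Mini-Mezzanine HBA, or a single controller serving both roles), a minimal Redfish sketch like the one below will list them. The iDRAC address, credentials, and resource paths are assumptions for illustration, not a documented Dell procedure.

import requests

IDRAC = "https://192.0.2.10"                     # hypothetical iDRAC address for illustration
STORAGE = f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Storage"

session = requests.Session()
session.auth = ("root", "calvin")                # replace with real credentials
session.verify = False                           # lab-only; keep TLS verification on in production

# Walk the storage collection and report each controller with its attached drive count.
for member in session.get(STORAGE).json().get("Members", []):
    ctrl = session.get(f"{IDRAC}{member['@odata.id']}").json()
    print(f"{ctrl.get('Name', member['@odata.id'])}: {len(ctrl.get('Drives', []))} drive(s)")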
The MX840c has the option to install the OS with the BOSS module. If you're booting a hypervisor, you can install the IDSDM instead. These two cards can't be installed together; they take up the same slot. You can also extend iDRAC remote management with the vFlash module.
The iDRAC vFlash module lets you install and manage the system remotely. You can also easily upgrade to the Enterprise software with a microSD vFlash card that controls all of this.
The Internal Dual SD Module (IDSDM) supports up to 2 microSD cards for redundancy when loading a hypervisor. It is not suitable for loading a full OS; the BOSS card will do that. The IDSDM and the BOSS card cannot be installed together.
The Boot Optimized Storage Subsystem (BOSS) is ideal for loading the OS onto any server. It saves a drive from being allocated for this job and does not take up any PCIe slots. The BOSS and IDSDM cannot be installed together.
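As a quick way to confirm what a sled is currently set to boot from, the sketch below reads the standard Redfish Boot settings exposed by iDRAC9. The address, credentials, and paths are placeholder assumptions; whether a BOSS virtual disk or the IDSDM shows up as the boot device depends on how the system was configured.

import requests

IDRAC = "https://192.0.2.10"                     # hypothetical iDRAC address for illustration
SYSTEM = f"{IDRAC}/redfish/v1/Systems/System.Embedded.1"

# The Boot object is part of the standard Redfish ComputerSystem schema.
boot = requests.get(SYSTEM, auth=("root", "calvin"), verify=False).json().get("Boot", {})
print("Boot source override:", boot.get("BootSourceOverrideTarget"))
print("Override enabled:    ", boot.get("BootSourceOverrideEnabled"))
print("Boot mode:           ", boot.get("BootSourceOverrideMode"))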