Dell PowerEdge R940 — Full Specs Breakdown & Buyer's Guide

Dell PowerEdge R940

14th Gen 4-Socket 3U Rack Server — Quad 2nd Gen Intel® Xeon® Scalable · Up to 112 Cores · 48 DIMM Slots · Up to 15.36 TB Memory · 13 PCIe Gen 3 Slots · iDRAC9

Gen 14 · 3U Rack · Quad Socket · 2nd Gen Xeon Scalable · Up to 112 Cores · 48 DIMM Slots · Up to 6 TB DDR4 · Up to 15.36 TB w/ DCPMM · 13 PCIe Gen 3 Slots · Up to 12 NVMe Drives · iDRAC9

The 4-Socket 3U Platform for SAP HANA Scale-Up, Data Warehousing, Mission-Critical Databases, and HPC

  • SAP HANA Scale-Up and In-Memory Databases — Four sockets with 48 DIMM slots and Intel Optane DC Persistent Memory push total addressable memory to 15.36 TB in a single 3U chassis; large-scale SAP HANA, Oracle Database In-Memory, and SQL Server In-Memory OLTP instances that require multi-terabyte in-memory datasets fit entirely within one R940 node without scale-out fabric complexity

  • Data Warehousing — Teradata, Greenplum, and Vertica Nodes — High aggregate memory bandwidth across four CPU sockets, 13 PCIe Gen 3 expansion slots for NVMe SSDs and high-speed network adapters, and up to 12 NVMe PCIe SSDs deliver the columnar query scan throughput for multi-terabyte analytical workloads on Teradata, Greenplum, Vertica, and Cloudera Impala

  • High Performance Computing — 2-Socket UPI-Optimized Configuration — The R940 uniquely offers a 2-socket configuration delivering 50% more Intel UPI bandwidth compared to a standard 2-socket server by leveraging the quad-socket UPI fabric with only two processors populated; well-suited for MPI-intensive HPC workloads including CFD, seismic processing, and financial Monte Carlo simulations that benefit from maximum inter-socket coherency bandwidth per node

  • Enterprise ERP and OLTP at Scale — 4-socket NUMA topology with up to 6 TB DDR4 LRDIMM supports multi-terabyte SAP ECC, Oracle E-Business Suite, Oracle Financials, and Microsoft Dynamics ERP workloads without scale-out complexity; single-node consolidation eliminates distributed transaction overhead for large-scale OLTP databases

  • Scale-Up Virtualization and Private Cloud — Consolidate large VM estates onto a single 4-socket host providing up to 112 vCPU cores, 6 TB DDR4, and 13 PCIe slots for networking and storage fabric cards; Intel UPI inter-socket fabric minimizes NUMA penalty for memory-heavy VMs in VMware vSphere, Microsoft Hyper-V, and RHEL KVM environments requiring maximum per-host compute density

  • Mission-Critical Oracle and SQL Server Databases — The R940's 4-socket coherent shared-memory architecture is designed for Oracle RAC nodes, Oracle Database Enterprise Edition, and SQL Server Always On clusters that demand consistently low-latency access to terabytes of hot OLTP data while maintaining high I/O throughput to NVMe or SAS storage simultaneously

  • eCommerce and CRM Infrastructure — High memory capacity and 13 PCIe expansion slots accommodate combined database, application tier, and storage controller cards in a single chassis for eCommerce platforms, Salesforce-style CRM backends, and order management systems that must sustain millions of transactions per day under seasonal peak load

Dell PowerEdge R940 — Configuration Options Overview

Quad 2nd Generation Intel® Xeon® Scalable — Up to 28 Cores Per Socket and 112 Cores Total

  • Up to 4 × LGA 3647 Sockets (2nd Gen Xeon Scalable) — Full quad-processor support across the 2nd Gen Xeon Scalable Gold and Platinum lineup; flagship Xeon Platinum 8280 delivers 28 cores at 2.7 GHz and 205 W TDP per socket; quad configuration reaches 112 cores / 224 threads for maximum core-count density powering large SAP HANA, Oracle, and HPC workloads in a 3U chassis

  • 2-Socket HPC Configuration — 50% More UPI Bandwidth — The R940 uniquely supports a specialized 2-socket configuration that delivers 50% more Intel UPI inter-socket bandwidth compared to a standard dual-socket server by leveraging the full 4-socket UPI mesh topology with only two processors installed; purpose-built for bandwidth-bound HPC and data science workloads before scaling to quad configuration

  • Intel Ultra Path Interconnect (UPI) — Up to 3 × Intel UPI links per processor at 10.4 GT/s for low-latency high-bandwidth coherent inter-socket communication across the 4-socket topology; replaces QPI from prior Xeon E7 generations with significantly improved aggregate inter-CPU bandwidth for NUMA-sensitive analytics and database workloads

  • Intel C620 Chipset — PCH provides ACPI 4.0, PCIe 3.0 lanes, xHCI USB 3.0, Intel Active Management Technology, Trusted Execution Technology, VT-d, Rapid Storage Technology Enterprise, and Intel Node Manager 4.0 ME for a comprehensive enterprise I/O baseline across all four processor sockets

  • Up to 48 PCIe 3.0 Lanes Per CPU — 192 total PCIe lanes across all four Xeon Scalable processors feed the R940's 13-slot PCIe expansion fabric, rNDC, and PERC controller simultaneously; critical for configurations combining NVMe SSDs, 100G NICs, and PERC RAID controllers without lane contention

  • Intel AVX-512 and Deep Learning Boost — AVX-512 FMA units accelerate matrix math, compression, cryptographic operations, and linear algebra; Deep Learning Boost VNNI instructions run INT8 vector neural network inference directly on the CPU for lighter AI inference models across the quad-CPU fabric without GPU offload (see the flag-check sketch after this list)

  • Six Memory Channels Per Socket — Each 2nd Gen Xeon Scalable processor drives 6 DDR4 channels with 2 DIMMs per channel for 12 DIMM slots per socket; four sockets populate all 48 DIMM slots in a full dual-DIMM-per-channel configuration for maximum aggregate memory capacity across the entire platform
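
A quick way to confirm that a deployed node actually exposes the AVX-512 and VNNI features described above is to inspect the CPU flags the Linux kernel reports. The sketch below is illustrative only (Linux-specific, reads /proc/cpuinfo directly), not a Dell-provided tool:

```python
#!/usr/bin/env python3
"""Check whether this Linux host reports the AVX-512 and DL Boost (VNNI)
CPU flags. Reads /proc/cpuinfo, so it only applies to Linux hosts."""

def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                # Every logical CPU on one platform reports the same flag set,
                # so the first "flags" line is enough.
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = cpu_flags()
    for feature in ("avx512f", "avx512dq", "avx512bw", "avx512vl", "avx512_vnni"):
        print(f"{feature:12s} {'present' if feature in flags else 'missing'}")
```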

48-Slot DDR4 — Up to 6 TB LRDIMM or 15.36 TB with Intel Optane Persistent Memory

  • 48 DDR4 DIMM Slots — Twelve DIMM slots per CPU organized into six DDR4 channels with 2 DIMMs per channel; all 48 slots active in full quad-processor configurations; single-DIMM-per-channel configurations achieve peak DDR4 speed of 2933 MT/s with select 2nd Gen Xeon Scalable Gold and Platinum SKUs

  • Up to 6 TB DDR4 LRDIMM — 48 × 128 GB LRDIMMs achieve 6 TB total DDR4 system memory for large-scale in-memory analytics, multi-terabyte virtualization hosts, and mission-critical database nodes that require maximum capacity without Intel Optane persistent memory

  • Up to 15.36 TB with PMem + LRDIMM — 24 × Intel Optane DC Persistent Memory 512 GB DIMMs (6 per socket) combined with 24 × 128 GB LRDIMMs reach 15.36 TB total addressable memory — the highest capacity supported in the 3U platform for SAP HANA scale-up, Oracle Database In-Memory, and SQL Server In-Memory OLTP deployments

  • Intel Optane PMem Modes — Memory Mode uses DRAM transparently as an L4-style cache in front of PMem capacity; App Direct Mode exposes PMem as byte-addressable persistent storage with data retention across power loss for PMem-aware applications (a minimal App Direct sketch follows this list); up to 6 PMem DIMMs per CPU socket for a maximum of 24 per system

  • NVDIMM-N — Up to 384 GB — Up to 12 × 32 GB NVDIMM-N modules (384 GB total) provide battery-backed DRAM-speed persistent memory for write-ahead logs, key-value stores, and metadata structures that must survive power loss without OS-level flushing; ideal for Oracle DB redo logs and SQL Server log buffers

  • DDR4-2933 Peak Speed — 2nd Gen Xeon Scalable processors support 2933 MT/s memory; single-DIMM-per-channel configurations achieve the rated speed; dual-DIMM-per-channel configurations run at 2666 MT/s; PMem modules operate at up to 2666 MT/s in both Memory and App Direct modes

  • Advanced RAS: Mirroring, Sparing, and Fault Resilient Memory — Memory Mirroring duplicates writes across two channels for transparent failover; Single and Multi-Rank Sparing pre-allocates spare ranks for hot-swap error recovery; Dell Fault Resilient Memory protects VMware ESXi guests from DIMM faults; RDIMM and LRDIMM are both supported but cannot be mixed within a memory domain
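
To make the App Direct distinction concrete, the sketch below shows the shape of byte-addressable persistence from user space, assuming an fsdax namespace has already been created and mounted with the dax option at /mnt/pmem0 (the namespace, mount point, and file name are illustrative, not R940 defaults). Production code would normally use PMDK (libpmem/libpmemobj), which issues the proper cache-flush instructions; plain Python falls back to msync-style flushing:

```python
#!/usr/bin/env python3
"""Illustrative App Direct-style persistence: map a file on a DAX-mounted
filesystem and write to it directly. Assumes /mnt/pmem0 is a dax mount of an
fsdax namespace (hypothetical path). PMDK's pmem_persist() would normally be
used for durability; mmap.flush() (msync) is the portable Python fallback."""

import mmap
import os

PMEM_FILE = "/mnt/pmem0/example.dat"   # hypothetical DAX-backed file
SIZE = 4096

# Create and size the backing file on the DAX filesystem.
fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

# On a dax mount, loads and stores through this mapping reach the persistent
# media directly, with no page cache copy in between.
buf = mmap.mmap(fd, SIZE)
buf[0:13] = b"hello, pmem!\n"

# Force durability before relying on the data surviving a power loss.
buf.flush()

buf.close()
os.close(fd)
```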

Up to 24 × 2.5-Inch Hot-Swap Drive Bays — 12 NVMe PCIe SSDs, SAS, and SATA in the Same Chassis

  • 24 × 2.5-Inch Active Backplane Chassis — Maximum-density all-SFF configuration supporting 24 front-accessible hot-plug 2.5-inch SAS, SATA, or NVMe drives, of which up to 12 bays are universal NVMe-capable; the active backplane routes PCIe lanes directly from the processors to the drive bays for full CPU-direct-attach NVMe throughput

  • 8 × 2.5-Inch Passive Backplane Chassis — Mid-range SFF configuration for deployments where 8 hot-plug SAS/SATA drives balance density and cost; passive backplane without NVMe routing; supports an optical drive (DVD-ROM or DVD+RW) in the ODD bay, a position not available on the 24-bay active backplane chassis

  • Up to 12 NVMe PCIe Direct-Attach SSDs — In the 24-bay active backplane configuration, up to 12 of the 24 2.5-inch bays support Express Flash NVMe PCIe SSDs connected CPU-direct for the lowest-latency storage path; available in 1.6 TB, 3.2 TB, 3.84 TB, 6.4 TB, 7.6 TB, and 15.36 TB capacities; PCIe Gen 3 path eliminates SAS/SATA controller latency overhead

  • Maximum Internal Storage — Up to 184.32 TB — 24 × 7.68 TB 2.5-inch SSDs (up to 12 of the bays may instead hold NVMe PCIe SSDs) achieve up to 184.32 TB of raw internal storage for high-density data lake, object storage, and warm-tier archive workloads in a single 3U chassis without external JBOD expansion

  • SAS 12 Gbps Drive Support — 10K (1.2 TB, 1.8 TB, 2.4 TB) and 15K (300 GB, 600 GB, 900 GB) SAS HDDs and SAS SSDs (400 GB–3840 GB including SED FIPS) coexist with NVMe drives in the 24-bay chassis via PERC hardware RAID controllers for mixed-tier storage pools optimized for cost and performance

  • SATA SSD and HDD — SATA 6 Gbps SSDs (240 GB through 3.84 TB) and SATA HDDs (1 TB, 2 TB at 7.2K) supported for cost-optimized high-capacity storage tiers, boot-volume configurations, and development environments under the S140 software RAID controller

  • External SAS Expansion — External 12 Gb SAS HBA and H840 rear-controller options connect Dell SAS disk shelves (MD14XX, PowerVault TL/ME4 series) and LTO tape library strings for environments requiring hundreds of additional drive bays beyond the 24 internal positions without migrating to a SAN fabric

BOSS M.2 SATA RAID Module and IDSDM with vFlash for Dedicated OS Boot Storage

  • BOSS Card (Boot Optimized Storage Subsystem) — Dedicated PCIe low-profile module hosts two M.2 SATA 6 Gbps SSDs on an independent controller; installs in a PCIe expansion slot without consuming any of the 24 front drive bays, ensuring all data bays remain fully available for workload storage in high-value NVMe and SAS configurations

  • 240 GB or 480 GB M.2 SATA Drives — Both BOSS M.2 slots support 240 GB or 480 GB SATA drives; BOSS volumes accommodate Windows Server, RHEL, SLES, and Ubuntu installations with full log retention and system temporary file storage without impacting dedicated workload drive capacity

  • Hardware RAID 1 Mirror — Integrated BOSS RAID controller presents the two M.2 SSDs as a hardware-mirrored RAID 1 volume to the OS; a single M.2 drive failure is completely transparent to the operating system with no manual failover or reboot required — critical for always-on HANA and database nodes

  • Preferred for Full OS Deployments — For bare-metal OS deployments of Windows Server, RHEL, SLES, or Ubuntu, BOSS dedicates the M.2 mirror boot volume while all NVMe and SAS/SATA drives remain entirely dedicated to workload I/O — the correct configuration for any R940 where every front bay has high analytical or database workload value

  • IDSDM — Internal Dual SD Module — Supports 2 × microSD cards (16, 32, or 64 GB each) in hardware-mirrored IDSDM mode for VMware ESXi and hypervisor-only deployments where the complete boot image fits in compact microSD form factor; eliminates the need to consume any PCIe slot or drive bay for the hypervisor volume

  • vFlash Module (16 GB) — A third microSD slot on the IDSDM card provides 16 GB iDRAC vFlash storage for OS deployment ISO images, RACADM configuration scripts, firmware staging payloads, and iDRAC Virtual Media without requiring external USB media at the data center aisle

  • Combined IDSDM + vFlash Support — The R940 supports IDSDM and vFlash simultaneously in the module bay — up to three microSD cards total for a combined persistent OS mirror boot volume plus vFlash provisioning storage, eliminating multiple external media dependencies at one installation

PERC H740P, H730P, H330, HBA330, H840 External, and S140 Software RAID

  • PERC H740P (Premium Performance) — 12 Gbps SAS/SATA hardware RAID with 8 GB NV cache for RAID 0/1/5/6/10/50/60; delivers maximum sustained IOPS and rebuild speed for 24-bay dense storage configurations with mixed SAS HDDs, SAS SSDs, and expander-connected drive pools in analytics and database workloads

  • PERC H730P (Value Performance) — 12 Gbps hardware RAID controller with 2 GB NV cache; proven for mid-range mixed-workload environments running SAS/SATA storage pools with hardware-accelerated parity RAID and lower total cost than the H740P NV cache premium; Mini PERC form factor preserves a general-purpose PCIe slot

  • PERC H330 (Entry Tier) — 12 Gbps RAID 0/1/5/10/50 without NV cache for cost-optimized deployments in the R940; available as a Mini PERC internal form factor that does not consume a general-purpose PCIe slot; suitable for environments where rebuild performance and NV cache write-back acceleration are not primary requirements

  • HBA330 (Non-RAID Pass-Through) — 12 Gbps SAS JBOD-mode HBA for software-defined storage deployments where the OS requires direct block-device access without a PERC RAID translation layer; supported as Mini internal form factor for Ceph, GlusterFS, and other distributed storage stacks running on the R940's quad-CPU platform

  • S140 Software RAID — Intel chipset-based software RAID supports RAID 0/1/5/10 on SATA and NVMe drives using CPU cycles; entry-level option for environments with lighter I/O requirements where hardware RAID controller cost is not justified; NVMe drives under S140 still benefit from CPU-direct PCIe attach latency

  • H840 External RAID Controller — Rear-installed full-height PCIe H840 connects external SAS disk shelves (MD14XX, PowerVault ME4) and LTO tape library strings for R940 deployments requiring hundreds of additional drives beyond the 24 internal positions; 12 Gb SAS external HBA also available for software-defined external JBOD pools

  • 12 Gb SAS HBA (External) — External 12 Gbps SAS HBA available for non-RAID direct-attach external JBOD expansion; supports SAS disk shelves, tape drives, and external storage arrays in environments where the R940 serves as a compute-storage converged node requiring flexible external JBOD access without RAID overhead

Dell PowerEdge R940 — Drive Backplane and Storage Controller Connection

GPU and FPGA Accelerators via 13 PCIe Gen 3 Slots in a 3U Chassis

  • 13 PCIe Slots Enable Multiple GPU Configurations — The R940's 13 PCIe Gen 3 slots (3 × x8 + 10 × x16) provide significantly more GPU expansion capacity than comparable 2U 4-socket platforms; full-height full-length double-width and single-width GPU cards install in x16 full-height slots without displacing PERC, NVMe, or 100G networking cards

  • PCIe Gen 3 x16 GPU Slots — Full-height full-length x16 slots connected directly to the processors' PCIe root complexes provide maximum GPU-to-CPU memory bandwidth and minimal inter-socket NUMA hop latency for GPU workloads that require fast data transfer between host system memory and GPU device memory

  • AI Inference and Deep Learning Acceleration — GPU cards in the R940 PCIe expansion fabric accelerate AI inference, deep learning model serving, image recognition, and NLP workloads co-located with the quad-CPU compute platform; 15.36 TB addressable memory combined with GPU device memory enables large language model and dataset workflows without external GPU appliances

  • FPGA Accelerator Support — Full-height full-length FPGA accelerator cards (Intel/Xilinx variants) install in PCIe x16 expansion slots for hardware-accelerated packet processing, custom algorithm offload, encryption acceleration, and real-time signal processing workloads requiring deterministic sub-microsecond execution latency

  • GPU and NVMe + RAID Coexistence — The R940's 13 PCIe expansion slots accommodate GPU cards alongside PERC hardware RAID controllers, NVMe drive adapters, and high-speed networking cards simultaneously; configurations combining dual GPU, a PERC H740P, and a 100G NIC remain viable in the 13-slot expansion fabric where they would not be possible in a 6-slot 2U chassis

  • PSU Requirements for GPU Configurations — High-TDP GPU configurations in quad-processor R940 deployments require 2000 W or 2400 W AC PSUs to maintain power headroom for all four processors, full DIMM population, 24 drives, and GPU cards under sustained peak load simultaneously

  • Thermal Validated Envelope — GPU cards in the R940 are validated within standard 30°C recommended inlet temperature; the 3U chassis provides greater airflow cross-section than 2U platforms, improving thermal headroom for GPU load in dense rack configurations; consult R940 thermal documentation for specific GPU TDP de-rating above 30°C ambient

Dell PowerEdge R940 — Internal Chassis View

Up to 13 × PCIe Gen 3 Slots — 3 × x8 and 10 × x16 for Maximum I/O Expansion

  • 13 × PCIe Gen 3 Slots Total — The R940 provides up to 13 PCI Express Generation 3 expansion slots in a 3U chassis — 3 × x8 slots and 10 × x16 slots; this is more than double the PCIe expansion capacity of comparable 2U 4-socket servers, enabling dense multi-accelerator, multi-HBA, and multi-NIC configurations in a single node

  • 10 × Full-Height x16 Slots — Ten full-height full-length PCIe Gen 3 x16 slots provide maximum bandwidth per slot for GPUs, FPGAs, 100G NICs, InfiniBand HCAs, and NVMe host bus adapters; x16 slots can accommodate any PCIe expansion card regardless of bus width when physical slot width is compatible

  • 3 × x8 Slots — Three x8 PCIe Gen 3 slots for lower-lane-count expansion cards including 25G NICs, BOSS boot modules, FC8/FC16 HBAs, and external SAS controllers; x8 electrical slots with x16 physical connectors where present maintain physical card compatibility with the PCIe card lineup

  • Intel UPI Multi-Socket PCIe Fabric — PCIe slots are distributed across the processor sockets, so access from a non-owning socket traverses the Intel UPI fabric; for optimal performance, PCIe cards should be mapped to the processor socket that owns the corresponding memory and CPU cores for the workload; the R940 technical guide provides detailed PCIe slot-to-CPU affinity mapping for all 13 slots (a sketch for verifying affinity from the OS follows this list)

  • rNDC Integrated Slot (Separate) — The network daughter card installs in a dedicated rNDC slot at PCIe Gen 3 x8 bandwidth, consuming no general-purpose expansion slot; all 13 numbered PCIe expansion slots remain available for storage, GPU, networking, and accelerator cards alongside the base rNDC connectivity

  • Supported Expansion Cards — InfiniBand HCA (EDR/HDR) x16/x8, 100G NICs (Intel, Mellanox, Broadcom) x16, 25G NICs x8, 40G NICs x8, FC32/FC16/FC8 HBAs x8, BOSS card x4/x8, external RAID H840 x8, NVMe PCIe SSD adapters x8, GPU accelerators x16, FPGA x16, and 1G NICs x1/x4 — in full-height or low-profile depending on slot position

  • Software-Defined Infrastructure Flexibility — 13 slots provide enough bandwidth for converged compute-storage-network architectures; typical deployment combines PERC H740P, 2 × 25G rNDC offload, 2 × 100G PCIe NIC, and multiple NVMe adapters while retaining open slots for GPU or InfiniBand — configurations that fill slot budgets in smaller chassis but breathe easily in the R940
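
Once cards are installed, the slot-to-CPU affinity described above can be confirmed from the operating system. The sketch below is a generic Linux check (not a Dell utility) that reads each PCIe device's owning NUMA node from sysfs; a value of -1 means the kernel could not determine the node:

```python
#!/usr/bin/env python3
"""List each PCIe device with the NUMA node (CPU socket domain) that owns it,
using only Linux sysfs. Useful for checking that a NIC, HBA, or GPU sits on
the same socket as the workload that drives it."""

import glob
import os

def pci_numa_map():
    mapping = {}
    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        addr = os.path.basename(dev)            # e.g. 0000:3b:00.0
        try:
            with open(os.path.join(dev, "numa_node")) as f:
                node = int(f.read().strip())
        except (OSError, ValueError):
            node = -1                           # unknown / not reported
        mapping[addr] = node
    return mapping

if __name__ == "__main__":
    for addr, node in pci_numa_map().items():
        print(f"{addr}  NUMA node {node}")
```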

Dell PowerEdge R940 — Intel UPI PCIe Cabling and Expansion

Dell Select rNDC — 4 × 1 GbE, 4 × 10 GbE, 2 × 10GbE + 2 × 1 GbE, or 2 × 25 GbE

  • rNDC Integrated Without Sacrificing a PCIe Slot — Dell Select Network Adapters (rNDC) install in the dedicated rear rNDC slot at x8 PCIe Gen 3 bandwidth; all 13 numbered expansion slots remain available for storage, GPU, and additional networking alongside base rNDC connectivity

  • 4 × 1 GbE Option — Quad 1 Gbps copper rNDC for environments with 1 GbE top-of-rack infrastructure; suitable for management plane traffic, lightweight application servers, and control-plane connectivity where 10GbE uplink cost is not justified at every R940 node

  • 4 × 10 GbE Option — Quad 10 Gbps rNDC (SFP+ or BASE-T options) provides four independent 10GbE ports for NIC teaming, iSCSI multi-path, NFS multi-path, and VM traffic separation across four physical uplinks without consuming additional PCIe slots from the 13-slot expansion fabric

  • 2 × 10 GbE + 2 × 1 GbE Option — Hybrid rNDC configuration delivering two 10GbE ports for primary data traffic and two 1GbE ports for management, iSCSI, or dedicated backup traffic; balances high-speed data-plane bandwidth with dedicated low-speed management-plane connectivity in a single integrated adapter

  • 2 × 25 GbE Option — Dual 25 Gbps SFP28 rNDC for high-bandwidth environments with 25GbE top-of-rack switching; provides optimal throughput for SAP HANA replication, inter-node HPC communication, and backup-to-disk at high transfer rates without consuming a general-purpose PCIe expansion slot

  • Additional PCIe Networking Cards — 25G NICs (Intel, Broadcom, Mellanox), 100G NICs, 40G NICs, InfiniBand HCA EDR/HDR (Mellanox), and Omni-Path HFI can be installed in the 13 expansion slots for cluster networking, RDMA-over-Converged-Ethernet (RoCE), and high-performance HPC fabrics; the R940's 13-slot capacity easily accommodates multiple high-bandwidth NICs alongside GPU and storage cards

  • iDRAC9 Dedicated Management NIC — Separate 1 GbE iDRAC9 RJ-45 rear port provides out-of-band management traffic isolation independent of the data-plane rNDC; shared iDRAC LOM mode also available; dedicated management port keeps iDRAC accessible even when all data-plane NICs are cycled during maintenance or OS troubleshooting

Dell PowerEdge R940 — Rear Network Daughter Card rNDC

Hot-Plug Redundant PSUs — 1100 W to 2600 W Platinum, Titanium, and DC Options

  • Fully Redundant Hot-Plug PSUs — Hot-swappable, rear-accessible PSU bays support redundant operation; a failed PSU is replaceable under full operational load without interrupting running workloads, VM guests, network connections, or NVMe storage I/O — critical in always-on mission-critical database and analytics environments

  • 1100 W Platinum AC and DC Options — Standard tier for configurations at moderate CPU TDP tiers with typical DIMM and drive populations; 1100 W 380VDC for China-specific DC-bus rack deployments; 1100 W 48VDC (Gold) for telecommunications rack infrastructure; all carry 80 PLUS Platinum or Gold efficiency ratings

  • 1600 W Platinum and Titanium HLAC — Recommended for full quad-socket configurations at moderate workload TDP tiers; 1600 W Titanium High Line AC achieves the highest efficiency rating in the lineup for facilities targeting PUE improvement in dense 4-socket server deployments; exceeds standard 80 PLUS Platinum specifications

  • 2000 W and 2400 W AC Platinum — Required for high-TDP configurations with Xeon Platinum 8280 (205 W × 4 = 820 W CPUs alone); 2000 W provides headroom for 4 × 205 W processors, full 48-DIMM population, 24 drives, and PCIe expansion cards under sustained peak load; 2400 W for GPU-loaded or maximum-density configurations

  • 2600 W Titanium HLAC — Highest-wattage option for maximum-density GPU-loaded or future-proofed configurations; High Line AC (200–240 V input) Titanium-grade for facilities with high-voltage PDU infrastructure; delivers maximum per-rail wattage headroom for demanding quad-processor configurations with multiple GPU cards and full NVMe population

  • Power Monitoring at 1% Accuracy — iDRAC9 real-time power consumption monitoring achieves 1% accuracy versus the industry-standard 5%; supports the Dell EMC Enterprise Infrastructure Planning Tool (EIPT) for data-center power budgeting and Power Capping to enforce hard per-server watt limits for colocation billing compliance (a Redfish power-query sketch follows this list)

  • 80 PLUS Certification and Energy Efficiency — Platinum and Titanium efficiency ratings across the PSU lineup comply with Climate Savers, ENERGY STAR, and 80 PLUS standards; Titanium units achieve 94–96% peak efficiency; matching PSU wattages is required when deploying multiple PSUs in redundant configurations for proper load sharing
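
The power telemetry above is also exposed programmatically through iDRAC9's Redfish API. The sketch below assumes a reachable iDRAC at a placeholder address with placeholder credentials and uses the standard Redfish Power resource; confirm the chassis ID (System.Embedded.1) and field names against your iDRAC firmware version:

```python
#!/usr/bin/env python3
"""Read real-time power draw from an iDRAC9 via Redfish. Hostname and
credentials are placeholders; requires the 'requests' package."""

import requests

IDRAC = "https://idrac-r940.example.com"   # placeholder iDRAC address
AUTH = ("root", "calvin")                  # replace with real credentials

resp = requests.get(
    f"{IDRAC}/redfish/v1/Chassis/System.Embedded.1/Power",
    auth=AUTH,
    verify=False,   # iDRAC commonly ships with a self-signed certificate
    timeout=30,
)
resp.raise_for_status()

# PowerControl carries consumed/capacity wattage and any power cap in effect.
control = resp.json()["PowerControl"][0]
print("Consumed watts:", control.get("PowerConsumedWatts"))
print("Capacity watts:", control.get("PowerCapacityWatts"))
print("Power cap:     ", control.get("PowerLimit", {}).get("LimitInWatts"))
```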


8 × Hot-Plug N+1 Fans with Intelligent Multi-Vector Thermal Control

  • 8 Hot-Plug Cooling Fans — Eight hot-plug fans in N+1 redundant configuration allow a single fan failure without triggering a thermal shutdown; failed fans are field-replaceable under full production load without removing the chassis from the rack — essential in the always-on database and analytics environments the R940 is designed for

  • Open + Closed Loop Hybrid Thermal Control — Open-loop pre-computed fan speed tables load from the system BOM at startup; closed-loop feedback from CPU, DIMM, PCH, inlet air, NVMe, and PCIe temperature sensors continuously refines fan speeds to the minimum required for full thermal compliance across all four processor sockets and the 13-slot PCIe expansion fabric

  • Standard 10–35°C Operating Range — Full component support for all CPU TDP tiers, PMem configurations, and GPU options within the standard recommended ambient temperature range; DAPC (Dell Active Power Controller) fan profile minimizes fan power consumption while maintaining all component thermal margins in the 3U chassis airflow path

  • Extended Fresh Air (5–40°C) — Continuous operation up to 40°C ambient for thermally compliant configurations; higher-TDP quad-Xeon Platinum configurations with 205 W TDP processors may require ambient de-rating reviewed in the R940 thermal guidelines documentation above 30°C

  • 3U Chassis Thermal Advantage — The R940's 3U chassis depth and height provide greater airflow cross-section than 2U 4-socket designs; wider fan blades and longer airflow path improve heat dissipation for the higher total TDP of quad-socket plus multi-GPU configurations without exceeding the validated thermal envelope

  • NVMe PCIe SSD Airflow Considerations — Full 24-bay NVMe configurations require higher airflow than SAS/SATA-only builds; iDRAC9 BIOS thermal profiles include NVMe-specific fan configurations to balance acoustic output with NVMe drive operating temperatures at sustained 100% I/O workloads in dense flash storage deployments

  • User-Configurable Thermal Profiles — iDRAC9 BIOS thermal settings include Performance Per Watt (DAPC/OS), Performance Optimized, and Maximum Performance modes; Max Exhaust Temperature and Fan Speed Offset are configurable for colocation environments with strict per-rack BTU budgets and specific exhaust temperature caps

Dell PowerEdge R940 — Cooling Fan Array

Dual Front USB 3.0, iDRAC Direct, Front VGA, and Full Rear I/O Panel

  • Front USB 3.0 × 2 — Two SuperSpeed USB 3.0 (5 Gbps) ports on the front control panel for OS installation media, USB diagnostic tools, and temporary portable storage without routing cables to the rear of the chassis while racked; front USB 3.0 standard on the R940 (superseding USB 2.0 on prior generations)

  • Front iDRAC Direct (Dedicated USB Micro-AB) — Dedicated Micro-AB USB port with LED status indicator for direct laptop connectivity to iDRAC9 without requiring network access; the LED illuminates during active iDRAC Direct sessions for quick visual confirmation at the rack aisle during field diagnostics

  • Front VGA Port — 1 × VGA connector on the front control panel for monitor console access during POST diagnostics, BIOS configuration, RAID setup utility, and OS installation at the rack without routing display cables to the rear panel of a racked 3U chassis

  • Rear USB 3.0 × 2 — Two SuperSpeed USB 3.0 (5 Gbps) ports on the rear panel for persistent keyboard/mouse attachments, external USB storage, long-term diagnostic drives, and KVM adapter dongles in rack-mounted environments with rear access

  • Rear VGA and Serial — 1 × VGA display port and 1 × DB-9 serial port on the rear panel; the serial port supports iDRAC9 Serial-over-LAN (SOL) for headless serial console redirect through out-of-band management without a physical serial terminal — essential for headless data center deployments

  • Onboard Video — The front and rear VGA connectors are driven by the server's onboard video controller, giving local console access at either side of the racked chassis during deployment and service procedures

  • iDRAC9 Dedicated Management Port and System ID — 1 × dedicated 1 GbE iDRAC9 RJ-45 management port on the rear panel for out-of-band management traffic isolation; System ID button with blue LED for rack identification during maintenance; optional Quick Sync 2 BLE/Wi-Fi bezel for front panel wireless management via mobile device

Dell PowerEdge R940 — Internal USB Port Description

Cyber Resilient Architecture — Silicon Root of Trust, TPM, Secure Boot, and System Erase

  • Silicon Root of Trust — Factory-burned cryptographic identity in iDRAC9 silicon validates every firmware component in the boot chain before any host CPU instruction executes; hardware-anchored trust is immune to OS-layer and hypervisor-layer firmware injection attacks that bypass software-only validation in production SAP HANA and Oracle environments

  • Cryptographically Signed Firmware — All firmware packages — BIOS, iDRAC, PERC, NIC, PSU — carry Dell-issued digital certificates verified by Lifecycle Controller at install time; Lifecycle Controller rejects modified or unsigned firmware, preventing supply-chain firmware tampering across all R940 components from factory to data center

  • UEFI Secure Boot — Verifies all bootloader and kernel module signatures before the OS security stack loads; prevents rootkits, unauthorized OS images, and pre-boot malware from executing during the pre-OS initialization phase when host security agents and endpoint protection are not yet active

  • TPM 2.0 and TPM 1.2 (Optional) — Pluggable Trusted Platform Module provides hardware-rooted key storage for BitLocker volume encryption, vTPM support for VMware, Intel TXT-based measured boot, platform attestation, and platform identity certificates; TPM 2.0 NationZ available for China-regulatory compliance

  • System Lockdown Mode — iDRAC9-enforced lockdown policy prohibits all hardware and firmware configuration changes from BIOS, iDRAC, RACADM, and WS-Man until an authorized administrator disables the lockdown; prevents configuration drift across regulated R940 deployments in financial services, healthcare, and government environments

  • System Erase (NIST 800-88 Secure Erase) — Cryptographic and overwrite erase for all internal storage media including SSDs, HDDs, NVMe PCIe drives, NVDIMM flash, IDSDM microSD cards, and optionally CPU volatile memory for NIST 800-88-compliant decommissioning at end of server lease or redeployment in regulated industries

  • Physical Security Features — Chassis cover intrusion switch detects unauthorized chassis opening; optional locking security bezel restricts physical drive access; toolless cover latch with optional keyed lock; power-button disable configurable via BIOS for environments where unauthorized physical power-off is a compliance or availability concern


iDRAC9 with Lifecycle Controller, RESTful Redfish API, Quick Sync 2, and OpenManage

  • iDRAC9 Embedded Out-of-Band Controller — Dedicated management processor on its own power plane with independent 1 GbE NIC; provides persistent hardware inventory, component health alerting, remote KVM console, and 1% power monitoring accuracy regardless of host OS state — always-on management for always-on SAP HANA and Oracle nodes

  • Lifecycle Controller 3.x — Agent-free system provisioning, OS deployment, firmware baseline update, hardware configuration, and log collection operate entirely through iDRAC9 without a running OS; touch-free bare-metal deployment from a remote console is fully supported across all R940 processor configurations including the 2-socket HPC variant

  • iDRAC RESTful API with Redfish — Full DMTF Redfish standards-based JSON REST API enables infrastructure-as-code automation from Ansible playbooks, Terraform modules, ServiceNow workflows, Python scripts, and PowerShell DSC for fleet-scale R940 lifecycle management, health monitoring, and capacity reporting (an inventory-query sketch follows this list)

  • Quick Sync 2 Wireless Module (Optional) — BLE + Wi-Fi wireless bezel module enables iDRAC9 inventory read, RACADM command push, and firmware update trigger from the Dell OpenManage Mobile app on a smartphone at the front panel without a dedicated laptop connection — useful during physical data center tours and spot checks

  • OpenManage Enterprise — Single-console lifecycle management for the complete PowerEdge fleet including automated discovery, firmware compliance baselining, policy push, alert escalation to ticketing systems, and per-server power consumption dashboards across all R940 nodes in the data center

  • Ecosystem Integration — OMIVV for VMware vCenter manages R940 health and firmware from within the vSphere client; OpenManage Ansible Modules automate provisioning in CI/CD pipelines; integrations for BMC TrueSight, Microsoft System Center, Red Hat Ansible, Nagios Core/XI, and IBM Tivoli cover major enterprise IT operations platforms

  • SupportAssist Embedded — Proactive and predictive diagnostics engine embedded in iDRAC9 automatically creates Dell Support cases, dispatches replacement parts, and generates AI-based failure probability scores for drives, DIMMs, fans, and PSUs; reduces unplanned downtime by detecting component anomalies before production impact in mission-critical SAP HANA deployments
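
As a concrete illustration of the Redfish automation path mentioned above, the sketch below pulls a basic hardware summary from iDRAC9, the kind of call an Ansible module or CMDB sync job would make. The address and credentials are placeholders, and field names follow the DMTF Redfish Systems schema as implemented by iDRAC9 (verify against your firmware version):

```python
#!/usr/bin/env python3
"""Query an iDRAC9 over Redfish for a basic system summary. Hostname and
credentials are placeholders; requires the 'requests' package."""

import requests

IDRAC = "https://idrac-r940.example.com"   # placeholder iDRAC address
AUTH = ("root", "calvin")                  # replace with real credentials

resp = requests.get(
    f"{IDRAC}/redfish/v1/Systems/System.Embedded.1",
    auth=AUTH,
    verify=False,   # self-signed iDRAC certificate is common
    timeout=30,
)
resp.raise_for_status()
system = resp.json()

print("Model:        ", system.get("Model"))
print("Service tag:  ", system.get("SKU"))          # Dell exposes the tag as SKU
print("Power state:  ", system.get("PowerState"))
print("Health:       ", system.get("Status", {}).get("Health"))
print("CPU sockets:  ", system.get("ProcessorSummary", {}).get("Count"))
print("Memory (GiB): ", system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))
```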


Windows Server, RHEL, SLES, VMware ESXi, Ubuntu, Oracle Linux, and Citrix Hypervisor Certified

  • Windows Server LTSC with Hyper-V — Full Microsoft Hyper-V host certification for the R940's 4-socket NUMA topology; Windows Admin Center cluster management; iDRAC Service Module (iSM) enables host-to-iDRAC health and power reporting integration; SQL Server on Windows with Always On Availability Groups fully validated

  • VMware ESXi — VMware Hardware Compatibility Guide (HCG) certified for all major ESXi versions; OMIVV for vCenter plugin manages R940 health and firmware compliance from within vSphere; PMem App Direct Mode exposed to individual VMs for persistent-memory-aware guest workloads; DCPMM vNVDIMM provisioning supported

  • Red Hat Enterprise Linux (RHEL) — RHEL 7 and 8 certified for long-term enterprise Linux deployments including OpenShift Container Platform bare-metal worker nodes; RHEL for SAP HANA with Intel Optane PMem App Direct Mode validated for the R940's multi-terabyte in-memory SAP scale-up configurations

  • SUSE Linux Enterprise Server (SLES) — SLES certified including SLES for SAP Applications; DCPMM-enabled SAP HANA scale-up on SLES on the R940 achieves the highest SAP HANA memory capacity in the Gen 14 platform lineup; SLES kernel NVM-PM drivers include full Optane PMem DIMM health monitoring integration

  • Canonical Ubuntu Server LTS — Ubuntu LTS for OpenStack Compute nodes, Kubernetes bare-metal worker hosts, Ceph OSD nodes, and developer infrastructure; long-term security update cadence aligns with R940 operational lifespans; Ubuntu Advantage support available for critical data center Ubuntu deployments

  • Oracle Linux — Certified for Oracle Database, Oracle RAC (Real Application Clusters), and Oracle Middleware on Unbreakable Enterprise Kernel (UEK); hardware vendor support eligibility on certified Dell PowerEdge hardware critical for enterprise Oracle deployments requiring co-support agreements between Oracle and Dell

  • Citrix Hypervisor (XenServer) — Citrix Hypervisor certified for VDI (Citrix Virtual Apps and Desktops), hosted private cloud, and multi-tenant application delivery; 4-socket memory capacity sustains high-density VDI farms with large per-VM memory allocations that exhaust single-socket and dual-socket platforms

Dell PowerEdge R940 — 8-Bay 2.5-Inch Drive Configuration

ReadyRails Sliding and Static for All 19-Inch 4-Post and 2-Post Rack Types

  • ReadyRails Sliding — Standard (Drop-In) — Tool-less drop-in installation in 19-inch square or unthreaded round-hole 4-post racks; tooled install for threaded racks; full-extension slide for DIMM, drive, fan, PCIe card, and processor servicing without removing the 3U chassis from the rack; square-hole adjustment range supports standard EIA-310-E 4-post cabinets

  • ReadyRails Sliding — Stab-In/Drop-In (Gen 14) — New Gen 14 stab-in design required for Dell EMC Titan and Titan-D racks; supports square, round, and threaded round-hole racks across the full depth adjustment range; recommended for mixed-cabinet environments deploying R940 alongside other Dell EMC enclosures

  • Optional Cable Management Arm (CMA) — CMA attaches to the sliding rail rear bracket and organizes all rear cable bundles (power, SAS, network) during full-extension chassis service; minimum rack depth with CMA installed requires a deep cabinet for safe 3U chassis extraction; cable slack is managed by the CMA during slide-out

  • ReadyRails Static — Stab-in installation for widest rack compatibility including 19-inch square, round, and threaded 4-post plus 2-post Telco racks; no CMA compatibility in static configuration; stab-in design allows fast deployment in colocation environments with mixed rack types

  • 3U Chassis Profile — 130.3 mm (5.13 inches) height occupies exactly three rack units; 434 mm (17.08 inches) full-width chassis fits standard 19-inch EIA-310-E compliant racks; chassis depth 784.2 mm (30.87 inches); three rack units provide more space than 2U siblings for internal airflow and component access during servicing

  • Weight and Lift Requirements — Maximum weight approximately 49.9 kg (110 lbs) with all drives and full component population; a 2-person lift is required per OSHA ergonomic guidelines for chassis removal; plan for 2-person rack team during initial installation events and quarterly drive or component swap operations

  • Dell Rack Compatibility — ReadyRails Stab-In/Drop-In required for Dell EMC Titan and Titan-D rack enclosures; standard sliding rails for PowerEdge-series Dell racks; static rails for non-Dell third-party cabinets where sliding rail minimum depth cannot be accommodated; all rail variants are compatible with the standard EIA-310-E 4-post rack pattern


R940 vs R930 — Cascade Lake, Intel UPI, DCPMM, iDRAC9, BOSS, NVMe, and 13 PCIe Slots

  • 2nd Gen Xeon Scalable vs Xeon E7-4800 v3/v4 — Xeon Scalable (Cascade Lake-SP, LGA 3647) replaces Broadwell-EX (LGA 2011-1); 50% more memory channels per socket (6 vs 4), Intel UPI replacing QPI, AVX-512, Deep Learning Boost, and full DCPMM/NVDIMM support — none of these features exist on any E7-4800 v3/v4 SKU regardless of binning

  • Up to 112 Cores vs 96 Cores Total — 4 × 28-core Xeon Platinum 8280 on the R940 reaches 112 cores versus 4 × 24-core Xeon E7-8890 v4 (96 cores) on the R930; roughly 17% more cores per node for parallel database processing and HPC workloads, plus AVX-512 double-width SIMD execution unavailable on any E7-generation SKU

  • 15.36 TB Addressable Memory vs DDR4-Only — R940 reaches 15.36 TB total addressable memory with 24 × 512 GB Intel Optane PMem DIMMs plus LRDIMMs; the R930 had no persistent memory support whatsoever — DCPMM requires 2nd Gen Intel Xeon Scalable processors and the Gen 14 platform memory controller architecture

  • Up to 12 Native NVMe Drives (New) — The R930 had no native CPU-direct-attach NVMe support; the R940 supports up to 12 Express Flash NVMe PCIe SSDs connected directly to processor PCIe lanes through the active backplane for sub-100 µs latency storage access — transforming peak storage bandwidth for analytics and database I/O

  • 13 PCIe Gen 3 Slots vs 10 — The R940 provides 13 PCIe Gen 3 expansion slots (3 × x8 + 10 × x16) versus 10 slots on the R930; the additional PCIe capacity accommodates configurations combining PERC RAID, 100G NICs, NVMe adapters, and GPU cards simultaneously without slot conflicts

  • BOSS M.2 Boot Module (New) and iDRAC9 — The R940 adds BOSS dedicated M.2 SATA RAID 1 boot volume, freeing all 24 front drive bays for data storage; iDRAC9 adds Silicon Root of Trust, Redfish RESTful API, Quick Sync 2, Server Lockdown, System Erase, and PMem health monitoring — all absent from iDRAC8 on the R930

  • 3U vs 4U — Smaller Footprint, More Capability — The R940 delivers substantially greater capability in 3U versus the R930's 4U chassis; the same 4-socket socket count in one fewer rack unit reduces rack space consumption by 25% while adding NVMe, DCPMM, more PCIe slots, and iDRAC9 in the same data center footprint

Feature | R930 (Gen 13) | R940 (Gen 14)
Processor Family | Xeon E7-4800/8800 v3/v4 (Broadwell-EX) | 2nd Gen Xeon Scalable (Cascade Lake-SP)
CPU Interconnect | Intel QPI | Intel UPI (up to 3 links @ 10.4 GT/s)
Max Cores Per Socket | 24 (E7-8890 v4) | 28 (Xeon Platinum 8280)
Max Cores Total | 96 (4 × 24) | 112 (4 × 28)
Memory Channels / Socket | 4 | 6
DDR4 Max Speed | 2133 MT/s | 2933 MT/s
Intel Optane DCPMM | Not supported | Up to 15.36 TB (24 × 512 GB)
NVDIMM-N | Not supported | Up to 384 GB
Native NVMe Drives | Not supported | Up to 12 CPU direct-attach
PCIe Expansion Slots | 10 | 13 PCIe Gen 3 (3 × x8 + 10 × x16)
BOSS M.2 Boot Module | Not available | 2 × M.2 SATA RAID 1 (240/480 GB)
Remote Management | iDRAC8 | iDRAC9 with Lifecycle Controller
Silicon Root of Trust | Not available | Hardware-anchored in iDRAC9
Quick Sync Wireless | Not available | Quick Sync 2 BLE/Wi-Fi
Form Factor | 4U Rack | 3U Rack

ProSupport Plus with SupportAssist and ProDeploy for R940 Deployments

  • ProSupport Plus — Dell's highest-tier support plan with SupportAssist automated monitoring, predictive failure scoring for drives, fans, DIMMs, and PSUs, and an assigned Services Account Manager for proactive R940 fleet management, performance baseline recommendations, and planned maintenance coordination for mission-critical installations

  • SupportAssist Embedded — Replaces manual support workflows with automated issue detection, case creation, and parts dispatch; AI-driven predictive analysis detects storage, cooling, and DIMM pre-failure indicators before production impact — especially critical for 4-socket always-on SAP HANA, Oracle, and HPC workloads where unplanned downtime carries significant business cost

  • ProSupport — 24×7×365 certified hardware and software engineer access with on-site next-business-day or 4-hour mission-critical parts and labor response for R940 fleets; human-escalated resolution within defined SLO response windows for environments with formal availability SLAs requiring measurable downtime bounds

  • ProSupport One for Data Center — Site-wide support contract covering R940 plus Dell EMC storage and networking under one agreement with assigned field and technical account managers; designed for large data center environments with multiple R940 server rows requiring unified cross-platform support coverage

  • ProDeploy Enterprise Suite — Certified Dell deployment engineers handle rack-and-stack (Basic Deployment), OS and firmware baseline configuration (ProDeploy), or full environment assessment, migration planning, SAP HANA DCPMM sizing, and knowledge transfer (ProDeploy Plus) for complex R940 4-socket installations

  • Residency Services — On-site or remote Dell experts available for Intel Optane PMem App Direct Mode configuration for SAP HANA and Oracle workloads, 4-socket NUMA topology planning, VMware vSAN ReadyNode cluster deployment, and HPC MPI fabric tuning for the 2-socket UPI-optimized R940 configuration

  • TechDirect Self-Service — Online portal for self-dispatching replacement parts, opening and managing support cases without phone escalation, API integration with internal ITSM ticketing systems, and accessing Dell certification and training resources for R940 administrators and infrastructure teams

Dell PowerEdge R940 — Power Entry Module

Frequently Asked Questions — Dell PowerEdge R940

How much memory does the Dell PowerEdge R940 support?

The Dell PowerEdge R940 supports up to 6 TB of DDR4 LRDIMM RAM across 48 DIMM slots (12 per processor × 4 sockets) at speeds up to 2933 MT/s. With Intel Optane DC Persistent Memory (DCPMM), total addressable memory reaches 15.36 TB using 24 × 512 GB PMem DIMMs combined with 24 × 128 GB LRDIMMs. Up to 384 GB of NVDIMM-N battery-backed persistent memory is also supported. All 48 DIMM slots require a quad-processor configuration. Configure your R940 memory at ECS.

How many processors and cores does the Dell PowerEdge R940 support?

The Dell PowerEdge R940 supports up to four 2nd Generation Intel® Xeon® Scalable processors (Cascade Lake-SP) in LGA 3647 sockets, up to 28 cores each. A fully configured quad-socket R940 delivers 112 cores and 224 threads. The R940 also operates in a unique 2-socket HPC configuration that provides 50% more Intel UPI bandwidth than a standard 2-socket server. Dual and quad-processor configurations are supported, with unused sockets requiring blank filler kits for proper airflow. Build your R940 at ECS.

How many PCIe slots does the Dell PowerEdge R940 have?

The Dell PowerEdge R940 supports up to 13 PCIe Gen 3 expansion slots — 3 × x8 slots and 10 × x16 slots. This is substantially more than comparable 2U 4-socket servers, enabling configurations combining PERC hardware RAID, multiple high-speed NICs, GPU accelerators, FPGA cards, and NVMe adapters in the same chassis without slot conflicts. The rNDC network daughter card installs in a separate dedicated slot and does not count against the 13-slot PCIe total.

What form factor is the Dell PowerEdge R940, and what are its dimensions?

The Dell PowerEdge R940 is a 3U rack server. Dimensions: 130.3 mm (5.13”) H × 434 mm (17.08”) W × 784.2 mm (30.87”) D. Maximum weight is approximately 49.9 kg (110 lbs.) fully loaded. The 3U profile occupies three rack units in a standard 19-inch EIA-310-E cabinet and is notably more compact than its Gen 13 predecessor (R930, 4U) while delivering substantially greater performance — same 4-socket count in one fewer rack unit.

Can I buy a refurbished Dell PowerEdge R940?

Yes. Express Computer Systems stocks professionally reconditioned refurbished Dell PowerEdge R940 servers tested and configured to your exact processor, memory, storage, and networking specifications. Whether you need a 4-socket SAP HANA scale-up platform with Intel Optane DCPMM, a dense 24-bay NVMe analytics node, or a 2-socket HPC configuration with maximum UPI bandwidth, ECS builds your R940 to specification and ships it ready to rack and power on. Shop refurbished Dell R940 servers at ECS.

 

Express Computer Systems

Ready to Deploy the Dell PowerEdge R940?

Express Computer Systems offers professionally reconditioned Dell PowerEdge R940 servers configured to your exact specifications — quad-processor count, Intel Optane DCPMM memory tier, 24-bay NVMe storage, GPU configuration, or 2-socket HPC bandwidth setup. Our team tests every unit and backs every order with our quality guarantee.

Start building your custom server today