Dell PowerEdge R750xs Specs and Features

15th Gen 2U Dual-Socket Rack Server — Up to 2 × 32 Cores · 1 TB DDR4 · 24 Drives · PCIe Gen 4

Gen 15 · 2U Rack · Dual Socket · 3rd Gen Xeon Scalable · Up to 64 Cores · 1 TB DDR4 ECC · PCIe Gen 4 · NVMe + SAS/SATA · iDRAC9

Purpose-Built 2U for Virtualization, Scale-Out Databases, and Software-Defined Storage

  • Enterprise Virtualization — The R750xs delivers dual 3rd Gen Xeon Scalable sockets with up to 64 physical cores, 1 TB DDR4, and PCIe Gen 4 bandwidth for mid-to-large virtual machine density; the 2U chassis allows up to 24 drives of per-host local storage — a cost-efficient virtual infrastructure workhorse for vSphere, Hyper-V, and KVM environments standardizing on a single dual-socket 2U platform

  • Medium VM Density and Non-GPU VDI — Listed in Dell's own workload guide as the target platform for medium VM density and non-GPU VDI deployments; 16 DDR4 RDIMM slots with a 1 TB ceiling and up to 64 physical cores provide the core-to-memory ratio needed for hosted desktop and application delivery without the cost of GPU add-in cards; OCP 3.0 25 GbE handles PCoIP and HDX display protocol bandwidth at scale

  • Scale-Out Database Tiers — Dual-socket core count, PCIe Gen 4 NVMe local storage, and 3200 MT/s DDR4 memory bandwidth serve MySQL, PostgreSQL, SQL Server, and Oracle database instances requiring sustained OLTP throughput; the 24-drive (16+8 NVMe) maximum configuration puts 184 TB of raw NVMe-and-SAS capacity in a single 2U chassis — a storage-rich scale-out database node

  • Software-Defined Storage Nodes — HBA355i pass-through mode enables OS-managed Ceph OSD, vSAN, or ZFS configurations; up to 12 × 3.5-inch SAS/SATA HDDs yield 192 TB raw per node at 7.2K rpm cost-per-GB economics; OCP 3.0 25 GbE fabric connectivity provides the per-node storage replication bandwidth required for large Ceph clusters and vSAN stretched-cluster architectures

  • High-Performance Computing (HPC) — CPU-bound simulation, CFD, genomics, and financial modeling workloads scale across both Xeon Scalable sockets; PCIe Gen 4 with 64 lanes per socket provides the bus bandwidth for high-speed InfiniBand or 100 GbE compute fabric adapters; 2U density maximizes node count per rack in compute clusters where memory capacity is secondary to raw core and bus throughput

  • Data Warehousing and Analytics — In-memory analytic queries benefit from 1 TB DDR4 RDIMM capacity and 3200 MT/s per-channel bandwidth; NVMe Gen 4 local scratch storage accelerates data-intensive ETL pipelines and parallel sort/join operations; up to 24 drives in a single 2U chassis allows entire hot-tier analytic datasets to reside locally without network-attached storage latency



3rd Gen Intel Xeon Scalable — Up to 2 × 32 Cores, 11.2 GT/s UPI, 64 PCIe Gen 4 Lanes per Socket

  • Platform Architecture — Dual LGA4189 sockets on the Intel C621A chipset; supports single or dual 3rd Gen Intel Xeon Scalable processors (Ice Lake-SP); Intel Ultra Path Interconnect (UPI) provides up to 3 links per CPU at 11.2 GT/s (Gold and Platinum) or 10.4 GT/s (Silver) for inter-socket NUMA bandwidth; 64 PCIe Gen 4 lanes per socket at 16 GT/s deliver double the per-lane bandwidth of the Gen 14 predecessor platform

  • Peak Config — Xeon Gold 6338 (32 Cores) — 32 cores / 64 threads, 2.0 GHz base, 48 MB L3 cache, 11.2 GT/s UPI, 205 W TDP, 3200 MT/s DDR4; in dual-socket configuration delivers 64 cores / 128 threads from a single air-cooled 2U chassis; suited for demanding HPC, virtualization density, and large database workloads within the R750xs thermal envelope; the 6334 at 3.6 GHz base (8 cores, 165W) addresses high-frequency single-threaded workloads

  • Xeon Gold 6338N (32 Cores / High Efficiency) — The 6338N variant runs at 185 W TDP and 2666 MT/s memory — optimized for scale-out nodes where power efficiency per core matters more than peak memory bandwidth; the 6336Y (24c, 185W, 3200 MT/s) provides a 24-core option with full Gold-tier interconnect speed at lower chip cost

  • Xeon Gold Mid-Tier (16–28 Cores) — 6326 (16c, 2.9 GHz, 185W), 6330 (28c, 2.0 GHz, 205W), 5320 (26c, 2.2 GHz, 185W), 5318Y (24c, 2.1 GHz, 165W); 11.2 GT/s UPI; 2933–3200 MT/s DDR4; Gold mid-tier balances virtualization core count with lower acquisition premium — strong dual-socket option for organizations migrating multi-VM workloads from R730/R740 infrastructure

  • Xeon Gold 5300 Series — 5317 (12c, 3.0 GHz, 150W), 5315Y (8c, 3.2 GHz, 140W), 5320T (20c, 2.3 GHz, 150W); 11.2 GT/s UPI; 2933 MT/s DDR4; the Gold 5300 tier maximizes base clock frequency relative to core count — selected for latency-sensitive application middleware, licensing-per-core workloads, and dual-socket configurations where per-socket core count is limited by software licensing costs (a SKU-shortlist sketch follows this list)

  • Xeon Silver 4300 Series (Entry Dual Socket) — 4316 (20c, 2.3 GHz, 150W), 4314 (16c, 135W), 4310 (12c, 120W), 4310T (10c, 105W), 4309Y (8c, 105W); 10.4 GT/s UPI; 2666 MT/s DDR4 max; Silver 4300 is the most cost-efficient dual-socket option on the R750xs — right-sized for mid-scale file and application servers and branch-office infrastructure where dual-socket redundancy and PCIe Gen 4 bus access matter more than peak memory bandwidth

  • TDP Range and Fan Profiles — Processor TDPs from 105 W (Silver 4309Y) to 205 W (Gold 6338 / 6314U); iDRAC9 automatically selects the Standard, High Performance Silver, or High Performance Gold fan profile based on installed CPU TDP; up to 6 hot-swap fans with N+1 fan redundancy; no Direct Liquid Cooling (DLC) option on the R750xs — all supported processors operate within the air-cooled envelope

  • Single-Socket Operation — The R750xs ships and runs with a single CPU installed and the second socket left empty; iDRAC automatically adjusts memory population rules, PCIe riser availability, and fan profile; a second matched processor can be added later, enabling full dual-socket performance without chassis replacement — a low-risk phased scaling option for growing workloads
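
A minimal shortlisting sketch for the CPU tiers above. SKU figures are copied from this page (base clocks for the 4314 and 4310 are assumed from public Intel specs) — verify against Intel ARK and the Dell technical guide before ordering:

```python
# Illustrative R750xs CPU shortlist by per-socket core cap and base-clock floor.
SKUS = {
    # name: (cores, base_ghz, tdp_w, max_ddr4_mts)
    "Gold 6338":   (32, 2.0, 205, 3200),
    "Gold 6336Y":  (24, 2.4, 185, 3200),
    "Gold 6326":   (16, 2.9, 185, 3200),
    "Gold 5317":   (12, 3.0, 150, 2933),
    "Silver 4314": (16, 2.4, 135, 2666),  # base clock assumed from Intel ARK
    "Silver 4310": (12, 2.1, 120, 2666),  # base clock assumed from Intel ARK
}

def shortlist(max_cores_per_socket, min_base_ghz=0.0):
    """SKUs under a per-socket core cap (e.g. licensing) meeting a clock floor."""
    return sorted(
        (n for n, (cores, ghz, _, _) in SKUS.items()
         if cores <= max_cores_per_socket and ghz >= min_base_ghz),
        key=lambda n: -SKUS[n][1],  # highest base clock first
    )

# Per-core licensed database capped at 16 cores per socket:
print(shortlist(16, min_base_ghz=2.4))  # ['Gold 5317', 'Gold 6326', 'Silver 4314']
```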


16-Slot DDR4 RDIMM — Up to 1 TB at 3200 MT/s — 8 Memory Channels per CPU

  • 16 DDR4 RDIMM Slots — 16 total DIMM slots distributed as 8 per processor across 8 memory channels; at 1 DIMM per channel (1 DPC), DDR4 RDIMM operates at maximum rated speed of 3200 MT/s without bandwidth penalty; the R750xs single-DIMM-per-channel architecture ensures each memory channel runs at full speed regardless of total installed capacity

  • Maximum Capacity — 1 TB — Supported DIMM sizes: 8 GB RDIMM 1Rx8, 16 GB RDIMM 2Rx8, 32 GB RDIMM 2Rx8, and 64 GB RDIMM 2Rx4; maximum 1 TB using 16 × 64 GB RDIMMs at 3200 MT/s; RDIMM-only — no LRDIMM or Intel Optane PMem 200 Series support; ECC registered design reduces memory loading for greater stability at high capacity configurations

  • Memory Speed by CPU Tier — Gold 6300 and Platinum: up to 3200 MT/s at 1 DPC; Gold 5300 series: up to 2933 MT/s; Silver 4300 series: up to 2666 MT/s; workloads requiring maximum DDR4 bandwidth (HPC, analytics, large in-memory databases) should be paired with Gold 6300 or Platinum processors to fully exploit 3200 MT/s across all populated channels (a peak-bandwidth calculation follows this list)

  • Typical Configuration Scales — 64 GB (8 × 8 GB) — entry two-socket; 128 GB (8 × 16 GB) or 256 GB (8 × 32 GB) — mid-tier virtualization; 384 GB (12 × 32 GB) — database and file servers; 512 GB (16 × 32 GB) — high-density VDI; 1 TB (16 × 64 GB) — maximum analytic and in-memory workloads; any configuration scales up by adding matched DIMMs without chassis replacement

  • RAS and ECC Features — Registered DIMMs buffer command and address signals at the DIMM register for electrical stability at high capacities, while ECC provides hardware error correction; supports Single Device Data Correction (SDDC), Demand and Patrol Scrubbing, Memory Rank Sparing, and Multi-Rank ECC; iDRAC9 reports correctable and uncorrectable memory errors in real-time lifecycle event logs — enabling predictive DIMM replacement before uncorrectable errors cause unplanned downtime

  • Voltage and Standards — DDR4 1.2 V operation; complies with JEDEC DDR4 SDRAM specification; 1.2 V DDR4 vs 1.5 V DDR3 reduces per-DIMM power consumption by 20%; at full 1 TB population (16 × 64 GB RDIMMs) 1.2 V operation keeps chassis memory power draw manageable within the 2U air-cooled thermal budget

  • R750xs vs R740 Memory Comparison — R740 supported 24 DIMM slots at 2933 MT/s maximum with LRDIMM/Optane PMem options; R750xs reduces to 16 DIMM slots but raises DDR4 speed to 3200 MT/s and switches to Gen 4 PCIe bus; organizations migrating R740 workloads within a 1 TB DDR4 ceiling gain per-channel bandwidth improvement (+267 MT/s) and full Gen 4 platform benefits even as total DIMM slot count decreases
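
A quick way to sanity-check the bandwidth figures in this section: peak theoretical DDR4 throughput is transfer rate × 8 bytes per 64-bit channel × channel count. A minimal sketch:

```python
# Peak theoretical DDR4 bandwidth = transfers/s x 8 bytes (64-bit channel) x channels.
def ddr4_peak_gbs(mts, channels):
    return mts * 1e6 * 8 * channels / 1e9  # GB/s

print(ddr4_peak_gbs(3200, 8))   # 204.8 GB/s per socket at 1 DPC (Gold 6300 / Platinum)
print(ddr4_peak_gbs(3200, 16))  # 409.6 GB/s dual-socket aggregate
print(ddr4_peak_gbs(2933, 16))  # ~375.4 GB/s -- the 2933 MT/s ceiling, for comparison
```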


Up to 24 Drives (16 SAS/SATA + 8 NVMe) — 184 TB Raw in 2U + 2 Rear Drives

  • 24 × 2.5-inch Front (16 SAS/SATA + 8 NVMe) — Maximum configuration with up to 16 SAS/SATA plus 8 NVMe U.2 Gen 4 hot-plug bays; maximum 184.32 TB raw; universal backplane supports mixed SAS and NVMe in the same chassis — combine NVMe for hot-tier IOPS with SAS HDDs for capacity tier without deploying a separate JBOD shelf; ideal for software-defined storage and tiered database configurations

  • 16 × 2.5-inch SAS/SATA — 16 SFF hot-plug bays; max 122.88 TB; supports SAS 12 Gb/s, SATA 6 Gb/s HDD and SSD; suitable for mid-density virtualization hosts and database servers requiring RAID-protected SAS/SATA storage without NVMe cost premium; all 16 bays operate as hot-plug for zero-downtime drive replacement

  • 12 × 3.5-inch LFF SAS/SATA — 12 Large Form Factor hot-plug bays; max 192 TB (12 × 16 TB SAS/SATA); highest raw capacity configuration in the R750xs lineup; optimized for scale-out data-intensive workloads: Ceph OSD nodes, Hadoop data tiers, NAS, video surveillance archival, and backup landing zones where cost-per-TB ranks above IOPS density

  • 8 × 3.5-inch LFF SAS/SATA — 8 Large Form Factor hot-plug bays; max 128 TB; entry LFF configuration for capacity-first storage servers; simplified drive management with fewer bays reduces operational complexity for branch-office NAS, file server, and backup workloads; 8-drive LFF chassis is the lowest-cost R750xs entry storage configuration

  • 8 × 2.5-inch SAS/SATA/NVMe — 8 SFF hot-plug front bays; max 61.44 TB; supports SAS, SATA, and NVMe on universal backplane; reduced-bay chassis for compute-first workloads requiring limited local storage; PCIe Gen 4 NVMe performance is identical on a per-drive basis across all 2.5-inch bay configurations

  • Rear Drive Bays — Up to 2 × 2.5-inch — Optional rear 2.5-inch SAS/SATA/NVMe cage adds up to 2 additional hot-plug drives; max 15.36 TB in rear bays; ideal for a dedicated OS mirror (BOSS-independent), write-intensive journal/log tier SSD, or NVMe cache device without consuming front data bays; rear drives supported on 8 × 2.5-inch and select other front configurations

  • No-Drive Bay Configuration — Diskless chassis with no front backplane; used for compute-only nodes where all storage is provided by SAN, NAS, or converged network fabric; reduces chassis weight and acoustic output; BOSS-S2 module can still provide OS boot without any backplane drive installed

  • NVMe Gen 4 Drive Options — 2.5-inch U.2 NVMe PCIe Gen 4 SSDs up to 7.68 TB per slot; Gen 4 runs each lane at 16 GT/s, and each NVMe U.2 SSD operates at its full rated interface speed without sharing PCIe lanes through a SAS expander; enables all-flash performance approaching dedicated AFA arrays within the 2U chassis footprint (a per-drive bandwidth calculation follows this list)
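
Per-drive throughput follows directly from the lane math: 16 GT/s per lane with 128b/130b encoding, times four lanes for a U.2 drive. A minimal sketch of that arithmetic:

```python
# Usable PCIe link bandwidth: GT/s per lane x 128b/130b payload efficiency / 8 bits.
def pcie_gbs(gt_per_s, lanes):
    return gt_per_s * (128 / 130) / 8 * lanes  # GB/s, before protocol overhead

print(pcie_gbs(16, 4))  # ~7.88 GB/s -- one Gen 4 x4 U.2 drive
print(pcie_gbs(8, 4))   # ~3.94 GB/s -- the same drive on a Gen 3 platform
```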


PERC H755N NVMe RAID + H745 + H345 fPERC — Full 15G RAID Stack on PCIe Gen 4

  • PERC H755 (Premium SAS/SATA RAID) — 12 Gb/s SAS + SATA on PCIe Gen 4 with NV Flash-Backed write cache; RAID 0, 1, 5, 6, 10, 50, 60; NV cache preserves write data through unexpected power events without battery or supercapacitor maintenance cycles; highest-endurance RAID tier for OLTP databases, ERP transaction journals, and write-heavy RAID 6 arrays on the R750xs (usable-capacity arithmetic for these RAID levels is sketched after this list)

  • PERC H755N (NVMe-Native RAID) — NVMe-native RAID controller on PCIe Gen 4 with NV write cache; RAID 0, 1, 5, 6, 10, 50, 60 across NVMe Gen 4 U.2 SSDs; enables hardware-level data protection on all-NVMe configurations without running NVMe drives in unprotected JBOD mode — critical for production databases and latency-sensitive analytics arrays where NVMe RAID rebuild speed and data integrity are non-negotiable

  • PERC H745 (Value Performance RAID) — 12 Gb/s SAS + SATA on PCIe Gen 4 with write-back cache; RAID 0, 1, 5, 6, 10, 50, 60; mid-tier between H755 and H345 for virtualization hosts and moderate-intensity database workloads requiring RAID parity and write caching at lower controller cost; compatible with all SAS/SATA drives supported on the R750xs

  • PERC H345 fPERC — Zero PCIe Slot Consumption — 12 Gb/s SAS front PERC installs in a dedicated integrated planar slot — not a user PCIe slot; RAID 0, 1, 10; keeps all user PCIe Gen 4 expansion slots available for NICs, HBAs, or other add-in cards; essential for configurations needing RAID plus multiple full-height PCIe add-in cards simultaneously; software-defined storage HBA pass-through mode also available

  • HBA355i Internal SAS Pass-Through — 12 Gb/s SAS pass-through, presents drives directly to the OS; mandatory for Ceph OSD, vSAN, ZFS, and other software-defined storage platforms that manage their own parity and fault tolerance; HBA355i installs in standard PCIe slot or integrated PERC slot; removes the RAID controller from the I/O path for OS-managed storage stacks

  • PERC H840 External RAID + HBA355e External SAS — H840 external RAID controller enables attachment of Dell MD14xx PowerVault JBODs and ME4 storage arrays for capacity expansion beyond internal bays; HBA355e SAS 12 Gb/s rear-panel port for OS-pass-through to JBODs and tape libraries; dual-path SAS fabric provides fail-safe path redundancy to attached JBODs for SAN-connected deployments

  • S150 Software RAID — Firmware-managed SATA + NVMe software RAID; RAID 0, 1, 5, 10; no additional PCIe card required; lowest-cost RAID option for non-critical data tiers, single-drive OS installations, and development environments; RAID activity runs on the Xeon Scalable host processor — avoid on production storage arrays under sustained application load

  • Boot Optimized Storage Subsystem S2 (BOSS-S2) — HWRAID 2 × M.2 SATA SSDs in hardware mirror (RAID 1); dedicated non-backplane controller keeps OS boot volume completely separate from all PERC-managed data drives; M.2 SSD module installs in integrated slot — does not occupy a PCIe expansion slot or a front drive bay; full OS boot with zero impact on data storage capacity
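
For capacity planning against the RAID levels listed above, usable space on identical drives reduces to simple arithmetic; a minimal sketch (drive counts and sizes are illustrative):

```python
# Usable capacity for common RAID levels, assuming identical-size drives.
def usable_tb(level, n, drive_tb):
    if level == "0":
        return n * drive_tb             # striping, no redundancy
    if level == "1":
        return drive_tb                 # two-drive mirror
    if level == "5":
        return (n - 1) * drive_tb       # one drive of parity
    if level == "6":
        return (n - 2) * drive_tb       # two drives of parity
    if level == "10":
        return (n // 2) * drive_tb      # mirrored stripe halves
    raise ValueError(f"unsupported level: {level}")

# Example: 8 x 7.68 TB NVMe behind a PERC H755N
for lvl in ("5", "6", "10"):
    print(f"RAID {lvl}: {usable_tb(lvl, 8, 7.68):.2f} TB usable")
```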


BOSS-S2 Hardware RAID M.2 Mirror + IDSDM Dual SD — Dedicated OS Boot Options

  • BOSS-S2 (Boot Optimized Storage Subsystem) — Hardware RAID 1 mirror across 2 × M.2 SATA SSDs in a dedicated integrated module; BOSS-S2 installs in a proprietary planar connector — does not consume a front drive bay or user PCIe expansion slot; mirrored M.2 SSDs provide hardware-redundant OS boot without relying on software RAID or a PERC controller; the definitive OS boot option for production deployments

  • BOSS-S2 M.2 SSD Capacity — Available in 240 GB and 480 GB M.2 SATA SSD pairs; sufficient capacity for all supported bare-metal and hypervisor operating systems including VMware ESXi, Windows Server with Hyper-V, Red Hat Enterprise Linux, SUSE, Ubuntu, and Citrix Hypervisor; OS logs and swap are recommended on a separate PERC-managed data volume rather than the BOSS module

  • IDSDM (Internal Dual SD Module) — Dual microSD card slots in an integrated module; hardware RAID 1 mirror across two microSD cards (16 GB, 32 GB, or 64 GB capacity); provides a low-cost OS boot option for hypervisors with compact footprints (VMware ESXi stateless/diskless and similar); no moving parts — microSD cards are field-replaceable without tools; ideal for diskless compute nodes booting over the network with local state on SD

  • Internal USB 3.0 Port — One optional internal USB 3.0 port provides a discrete internal mounting point for USB boot media; concealed inside the chassis for security hardening; supports USB boot for recovery media, PXE-boot fallback, or one-time deployment utilities; the internal USB port does not consume a rear-panel connector and is not visible from outside the chassis

  • Separating OS from Data Storage — BOSS-S2 and IDSDM completely separate OS boot volume from all PERC-managed data drives; eliminates OS-to-data drive I/O contention at boot and during OS swap; all front and rear data bays remain exclusively available to PERC RAID arrays, HBA pass-through disks, or NVMe namespaces — no front drive bays are allocated to OS boot in a BOSS or IDSDM configuration

Up to 5 × PCIe Gen 4 + 1 × PCIe Gen 3 Slots — Riser 1A and 1B (SNAPI) Configurations

  • PCIe Gen 4 Platform — 64 PCIe 4.0 lanes per socket at 16 GT/s; total slot configuration: up to 5 × PCIe Gen 4 low-profile half-length slots plus 1 × PCIe Gen 3 (x8/x4 lane) low profile; Gen 4 doubles per-lane bandwidth versus the R740's Gen 3 platform — critical for 100 GbE / InfiniBand HDR NICs and NVMe RAID controllers that saturate Gen 3 bandwidth

  • Riser 1A Configuration (Config 0 and Config 3) — 2-slot riser providing Slot 3 (x16, low profile, CPU1) and Slot 4 (x16, low profile, CPU2 — Config 0 / CPU1 for Config 3); both slots are 75 W; Config 0 is the standard dual-CPU layout for maximum card-to-CPU PCIe bandwidth distribution; recommended when installing two independently operating high-bandwidth cards (dual 25 GbE, Fibre Channel HBA pairs)

  • Riser 1B / SNAPI Configuration (Config 1 and Config 2) — 2-slot SNAPI riser providing Slot 3 (x16, low profile) and Slot 4 (x8, low profile, CPU1); SNAPI (SNAP I/O) via Riser 1B lets a supported x16 network adapter or HCA draw PCIe lanes from both processors, giving each socket a direct path to the card without a UPI hop; both slots are 75 W; Configs 1 and 2 are preferred when a SNAPI-capable NIC is the primary add-in requirement

  • Slot Priority and Card Compatibility — Internal fPERC occupies a dedicated integrated slot — does not consume a user PCIe slot; BOSS S2 module uses integrated slot; OCP 3.0 NIC uses integrated OCP slot; Slots 3–6 and 1–2 (numbered per riser config) are available for NICs, HBAs, FC adapters, PCIe SSDs, and compute cards; up to 6 dual-port 25 GbE cards or 6 FC32 HBAs supported in appropriate configurations

  • 100 GbE and InfiniBand HCA Support — Mellanox and Intel 100 GbE NIC cards supported in Slot 4/3/5/6/1 (Config 0 and 3); Mellanox HDR100 VPI InfiniBand HCA supported in Slot 3 only (SNAPI riser); HDR100 VPI provides 100 Gb/s InfiniBand OR 100 GbE dual-personality connectivity — a single card that spans HPC cluster fabric and data center networking environments

  • No GPU Support — R750xs does not support GPU or co-processor add-in cards; the chassis thermal envelope and riser card slot height (low profile only) preclude full-height GPU installation; organizations requiring GPU compute must evaluate the Dell PowerEdge R750xa (GPU-optimized) or R750 (full) chassis — the R750xs is optimized for CPU-bound, storage-rich, and network-intensive workloads without GPU acceleration


2 × 1 GbE Embedded LOM + 1 × OCP 3.0 Slot — Up to 25 GbE Without Consuming a PCIe Slot

  • Embedded 2 × 1 GbE LOM — Two 1 GbE LOM ports on the Broadcom BCM5720 LAN controller; available on all R750xs chassis without any add-in cards; in shared-LOM mode, iDRAC out-of-band management traffic can be redirected over one LOM port as an alternative to the dedicated iDRAC NIC — the two paths operate independently, and using the dedicated port keeps management traffic fully isolated from production data traffic

  • OCP 3.0 Slot — PCIe Gen 4 x16 — One OCP 3.0 SFF (Small Form Factor) slot on PCIe Gen 4 x16 lanes at 16 GT/s; the OCP 3.0 NIC installs flush to the rear panel without consuming any user PCIe expansion slot; supports SNAPI through Riser 1B; all R750xs user PCIe slots remain fully available when OCP 3.0 provides the primary production NIC

  • OCP 3.0 NIC Portfolio — Up to 25 GbE — Vendor options: Intel SFP+ 10 GbE 2-port; Broadcom BT 1 GbE 4-port; Broadcom BT 10 GbE 2-port; Broadcom SFP28 25 GbE 2-port; Broadcom SFP28 25 GbE 4-port; Broadcom SFP+ 10 GbE 2-port; QLogic BT 10 GbE 2-port; QLogic SFP+ 10 GbE 2-port; QLogic SFP28 25 GbE 2-port; Intel BT 1 GbE 4-port; Intel BT 10 GbE 2-port; Intel SFP+ 10 GbE 4-port; Intel SFP28 25 GbE 2-port; Mellanox SFP28 25 GbE 2-port; SolarFlare SFP28 25 GbE 2-port

  • PCIe Add-In NICs — Up to 100 GbE — Mellanox 100 GbE (QSFP56) and Intel 100 GbE NIC cards installable in PCIe expansion slots; Mellanox HDR100 VPI InfiniBand HCA in Slot 3 (SNAPI riser); Broadcom, Intel, QLogic, and SolarFlare 25 GbE SFP28 dual-port cards across all user slots; up to 12 total 25 GbE ports from 6 dual-port add-in cards in the maximum slot configuration

  • Fibre Channel HBA Options — Broadcom and Marvell FC32 Fibre Channel (32 Gb/s) HBAs; Avago and QLogic FC16 HBAs; up to 6 FC HBAs in R1A or R1B configurations; FC HBAs provide direct SAN fabric connectivity to Dell EMC PowerStore, PowerVault, and third-party SAN arrays without Ethernet-based iSCSI or FCoE overhead

  • iDRAC Dedicated Management Port — 1 × 1 GbE dedicated iDRAC Ethernet port on the rear panel for out-of-band network connectivity (a front Micro-AB USB iDRAC Direct port provides local laptop access); the iDRAC management network is fully isolated from the production data network when using the dedicated port — management-plane separation required for security-hardened deployments and PCI-DSS, HIPAA, and NIST 800-53 aligned data center configurations


600 W to 1400 W Mixed-Mode Hot-Swap PSUs — Platinum and Titanium Efficiency, 1+1 Redundant

  • PSU Portfolio Overview — Four AC PSU options in a 60 mm form factor: 600 W Platinum Mixed Mode, 800 W Platinum Mixed Mode, 1100 W Titanium Mixed Mode, and 1400 W Platinum Mixed Mode; all support 100–240 Vac or 240 Vdc input in a single unit (Mixed Mode); DC-only option: 1100 W −48 Vdc for telecom environments; all are hot-swap with 1+1 redundancy

  • 1400 W Platinum (100–240 Vac / 240 Vdc) — Maximum wattage option for fully loaded dual-CPU, 12-drive configurations; 80 PLUS Platinum efficiency (>94% at 50% load); Mixed Mode input eliminates separate AC/DC power supply SKU requirements for datacenters mixing AC and 240 Vdc bus; recommended for 205–220 W TDP dual-socket configurations with fully populated drive bays

  • 1100 W Titanium (100–240 Vac / 240 Vdc) — Highest efficiency tier; 80 PLUS Titanium (>96% at 50% load); 1100 W capacity serves the majority of R750xs dual-socket configurations; Titanium efficiency reduces heat rejection per watt — measurable operating cost savings in PUE-sensitive data centers at scale; recommended for energy-conscious deployments and high-density rack rows

  • 800 W Platinum (100–240 Vac / 240 Vdc) — 80 PLUS Platinum; right-sized for single-CPU or low-TDP dual-CPU configurations (Silver 4300 and Gold 5300 tiers); 800 W reduces redundant PSU cost for deployments that do not require 1100 W+ headroom; Mixed Mode input universally accepted — an efficient entry-level PSU option for standard R750xs deployments with modest storage and processor TDP requirements

  • 600 W Platinum (100–240 Vac / 240 Vdc) — Entry-level PSU for low-density single-socket or minimal-drive configurations; 80 PLUS Platinum efficiency; selected for cost optimization in ROBO, branch-office, and compute-only diskless node deployments where peak chassis power draw is well under 500 W; redundant 600 W pair provides 1+1 protection without over-provisioning PSU capacity

  • 1100 W -48 Vdc — DC-input PSU for telecom and carrier-grade data centers operating on -48 Vdc to -60 Vdc power distribution bus; provides the same 1100 W output as the AC variant but directly from DC bus without rectifier overhead; required for edge telecom racks, central office deployments, and NEBS-adjacent environments standardized on DC power distribution

  • Hot-Swap and 1+1 Redundancy — Both PSU bays support hot-plug replacement while the server runs; iDRAC9 monitors PSU wattage, voltage, current, and health in real-time; PSU mismatch warning raised in BIOS, iDRAC, and LCD if installed PSU wattages do not match; Dell PSU power monitoring accuracy is 1% versus the industry standard 5% — more precise power budget management for colocation billing and capacity planning


Intelligent Air Cooling — Up to 6 Hot-Swap Fans with Automatic iDRAC9 Fan Profile Selection

  • Air Cooling Only — The R750xs supports air cooling exclusively; no Direct Liquid Cooling (DLC) option is available for this chassis; all processor TDPs from 105 W to 205 W operate within the air-cooled thermal envelope; standard front-to-rear data center airflow eliminates liquid-cooling infrastructure requirements and simplifies deployment in standard colocation and enterprise rack rows

  • Up to 6 Hot-Swap Fans — Up to six individually hot-swappable fan modules with N+1 fan redundancy; a single failed fan does not interrupt server operation — the remaining fans increase RPM automatically to compensate while the failed unit is hot-replaced without a service window; a fan failure generates an iDRAC9 alert and LCD indicator for immediate visibility

  • Three Fan Tier Profiles — iDRAC9 automatically selects between Standard (STD), High Performance Silver (HPR Silver), and High Performance Gold (HPR Gold) fan tiers based on installed CPU TDP and chassis drive configuration; the highest-TDP processors (205 W) automatically invoke HPR Gold fans; lower-TDP configurations (105–150 W) operate on STD or Silver fans with lower acoustic output and reduced power consumption

  • Sensor-Driven Adaptive Cooling — Extensive temperature sensors on CPU, memory, PCIe slots, drive backplane, and power supplies feed iDRAC9 thermal model in real-time; fan speed is dynamically scaled to maintain target inlet and exhaust temperatures; during light workloads fan speed decreases to minimum — reducing acoustic levels and fan power draw; under sustained maximum load fans ramp to maintain component temperatures within specification

  • Acoustical Performance — Software Defined Storage configuration (12 × 3.5-inch + rear 2 × 2.5-inch, dual CPU 150 W, 1400 W PSU): idle 6.7 B(A) / operating 6.7 B(A) @ 23°C; entry configuration (8 × 3.5-inch, single CPU 105 W, 800 W PSU): idle 4.7 B(A) / operating 4.7 B(A) @ 23°C; data center category 5 (SDS) and category 2 (entry) per Dell EMC acoustic standard

  • Fresh Air and ASHRAE A3/A4 Support — R750xs supports ASHRAE A3 (40°C inlet) and A4 (45°C inlet) thermal specifications for select configurations — enabling deployment in economizer-mode and fresh-air cooling data centers that reduce mechanical cooling energy; see the Dell EMC PowerEdge R750xs Technical Specifications for detailed thermal restriction tables by CPU, storage, and drive configuration

  • Power Monitoring Accuracy — Dell's PSU power monitoring accuracy is 1% vs. the industry-standard 5%; more accurate power reporting allows tighter power cap configurations without sacrificing performance headroom; iDRAC9 power capping (Dell Node Manager integration) enforces per-server power limits for rack PDU circuit protection and colocation power budgeting — a Redfish power-query sketch follows this list
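
The power telemetry described above is consumable programmatically. A minimal sketch reading live power draw from iDRAC9's standard Redfish Power resource — host and credentials are placeholders, "System.Embedded.1" is iDRAC9's default chassis ID, and exact paths may vary by firmware release:

```python
# Read live power telemetry from iDRAC9 over Redfish (DMTF Power schema).
import requests

IDRAC = "https://idrac.example.com"  # placeholder
AUTH = ("root", "calvin")            # replace with real credentials

resp = requests.get(
    f"{IDRAC}/redfish/v1/Chassis/System.Embedded.1/Power",
    auth=AUTH,
    verify=False,  # lab only -- use CA-signed certificates in production
)
resp.raise_for_status()
ctl = resp.json()["PowerControl"][0]
print("Consumed watts:", ctl.get("PowerConsumedWatts"))
print("Configured cap:", ctl.get("PowerLimit", {}).get("LimitInWatts"))
```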


Front iDRAC Direct, Rear Dual Ethernet + USB 3.0 + VGA — Full I/O Without Add-In Cards

  • Front Panel Ports — 1 × iDRAC Direct (Micro-AB USB) for direct out-of-band management via laptop USB cable without network connectivity; 1 × USB 2.0 for bootable media, keyboard, or OS configuration devices; 1 × VGA for direct console display output; quick-diagnosis access to all three port types from the server front without opening the rack — essential for on-site field service and initial system setup

  • Rear Panel Ports — 1 × USB 2.0; 1 × Serial port (optional, for legacy serial console); 1 × USB 3.0; 2 × 1 GbE Ethernet (embedded LOM, BCM5720); 1 × VGA; rear USB 3.0 supports external tape drives, USB storage for firmware update media, and persistent USB-attached devices that remain connected through server operations

  • Dedicated iDRAC Management Ethernet Network Port — 1 × 1 GbE dedicated iDRAC NIC on rear panel providing always-on out-of-band management network connectivity; completely isolated from both embedded LOM production ports; required for zero-trust management network architectures and compliance-mandated OOB management separation in PCI-DSS and FedRAMP environments

  • Internal USB 3.0 — Optional 1 × USB 3.0 internal port concealed inside chassis; supports permanently mounted USB boot media, OS license dongles, and security keys without consuming an external rear port; internal mounting prevents accidental disconnection and removes external USB attack surface — aligned with STIG and CIS hardening guidance for minimizing externally accessible ports

  • VGA Video Specifications — Matrox G200e integrated video controller; supports resolutions from 1024 × 768 up to 1920 × 1200 at 60 Hz; 8/16/32-bit color depth; front and rear VGA ports operate independently; rear VGA connects KVM switches for rack-level console management; front VGA supports direct monitor attachment for hands-on troubleshooting without disturbing the rear KVM connection

  • Quick Sync 2 Wireless Module (Optional) — Optional BLE (Bluetooth Low Energy) module on server front enables iDRAC9 configuration and status reading via Dell OpenManage Mobile on a smartphone; supports initial iDRAC IP configuration, viewing system health summary, and basic management tasks without network connectivity — reduces on-site visit scope for remote edge deployments and co-located servers requiring first-power-on configuration


iDRAC9 with Lifecycle Controller — Out-of-Band, RESTful/Redfish, OpenManage Enterprise Integration

  • iDRAC9 — Embedded Out-of-Band Management — Integrated Dell Remote Access Controller 9 provides always-on server management independent of host OS state; available in Express (basic), Enterprise (full OOB), and Datacenter (telemetry streaming) tiers; dedicated management NIC port isolates iDRAC traffic from production network; supports IPMI 2.0, SNMP, SSH CLI, web GUI, and RESTful API with Redfish

  • iDRAC Service Module (iSM) — In-band agent running in the host OS that extends iDRAC9 capability to OS-level metrics (process list, OS uptime, storage driver data, NIC team status); iSM enables iDRAC9 to see inside the OS without requiring a separate management agent stack — integrates natively with iDRAC9's out-of-band telemetry for a unified hardware-and-OS monitoring view

  • Lifecycle Controller 3.x — Persistent firmware and configuration management embedded separately from the host OS; handles bare-metal OS deployment, driver packages, BIOS/iDRAC/firmware updates, and RAID configuration from a pre-boot environment; Lifecycle Controller operations persist across OS reinstalls — configuration profiles survive complete OS wipe; enables zero-touch reprovisioning from iDRAC without PXE boot infrastructure

  • RESTful API with Redfish (DMTF Standard) — iDRAC9 exposes full management capability through the DMTF Redfish REST API; Python, PowerShell, and Ansible scripts can query hardware inventory, monitor sensor values, execute firmware updates, manage power state, and configure BIOS settings programmatically; GitHub-published Dell scripting libraries accelerate automation adoption; Redfish compliance enables multi-vendor management tool integration (see the inventory-query sketch after this list)

  • OpenManage Enterprise — One-to-Many Console — Dell EMC OpenManage Enterprise provides fleet-level lifecycle management across multiple generations of PowerEdge servers from a single console; covers hardware inventory, firmware update campaigns, configuration compliance, alert centralization, and deployment templates; integrates with VMware vCenter, Microsoft System Center, Red Hat Ansible, ServiceNow ITSM, and Nagios for full-stack IT operations management

  • Quick Sync 2 and OpenManage Mobile — Optional BLE Quick Sync 2 module pairs with OpenManage Mobile iOS/Android app for smartphone-based server access; allows technicians to read server health, view iDRAC event logs, configure iDRAC IP, and initiate remote power operations from a mobile device at the rack — reduces on-site visit time for distributed edge and co-located deployments

  • iDRAC9 Telemetry Streaming (Datacenter License) — iDRAC9 Datacenter tier enables real-time telemetry streaming of CPU, memory, NIC, storage, power, and thermal sensor data to external metrics platforms (Splunk, Grafana, Prometheus); Telemetry Streaming eliminates the polling latency of traditional SNMP traps — delivers continuous time-series infrastructure data for predictive analytics and SLA-driven capacity management
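
A hedged sketch of the Redfish inventory query mentioned above — host and credentials are placeholders; property names follow the DMTF Redfish ComputerSystem schema:

```python
# Basic hardware inventory via iDRAC9's Redfish service.
import requests

IDRAC = "https://idrac.example.com"  # placeholder
AUTH = ("root", "calvin")            # replace with real credentials

def rf(path):
    r = requests.get(f"{IDRAC}{path}", auth=AUTH, verify=False)  # lab only
    r.raise_for_status()
    return r.json()

system = rf("/redfish/v1/Systems/System.Embedded.1")
print(system["Model"], "| BIOS", system["BiosVersion"])
print("CPUs:", system["ProcessorSummary"]["Count"],
      "| Memory GiB:", system["MemorySummary"]["TotalSystemMemoryGiB"])
```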

Cyber Resilient Architecture — Silicon Root of Trust, Secure Boot, TPM 2.0, and System Lockdown

  • Silicon Root of Trust — Immutable silicon-based trust anchor in the iDRAC9 ASIC cryptographically verifies iDRAC firmware integrity at every power-on; chain of trust extends from silicon through iDRAC, BIOS, and Lifecycle Controller before the host OS loads any code; a compromised iDRAC firmware image fails silicon verification and prevents boot — protecting against supply-chain and firmware-level persistent threats

  • Secured Component Verification — Validates hardware component authenticity during manufacturing and at first boot; each component's cryptographic signature is verified against Dell's supply-chain database; detects counterfeit, tampered, or substituted components before they reach production — a published supply-chain security capability for organizations with CMMC, FedRAMP, and DISA STIG compliance requirements

  • Cryptographically Signed Firmware — All firmware packages (BIOS, iDRAC, PERC, NIC, PSU) are digitally signed by Dell; iDRAC9 validates the firmware signature before applying any update — preventing unsigned, modified, or counterfeit firmware from being installed even if an attacker gains administrative access to the management interface; aligns with NIST 800-193 Platform Firmware Resiliency guidelines

  • TPM 1.2 / 2.0 — FIPS and CC-TCG Certified — Hardware Trusted Platform Module (TPM) available in TPM 1.2, TPM 2.0 (FIPS 140-2 certified, CC-TCG certified), and TPM 2.0 China NationZ; TPM stores cryptographic keys, BitLocker volume keys, and platform measurements (PCR values) for remote attestation; supports self-encrypting drive (SED) unlock workflows and BitLocker hardware key storage without software key escrow

  • Secure Boot and UEFI — UEFI Secure Boot verifies bootloader and kernel signatures before execution; prevents unsigned OS loaders, bootkits, and rootkits from executing in the pre-OS environment; manages approved Secure Boot key database via iDRAC9 or the BIOS setup utility; compliant with Microsoft Windows Server, RHEL, SUSE, and Ubuntu Secure Boot signing chains

  • System Lockdown Mode — Available with iDRAC9 Enterprise and Datacenter license; when enabled, prevents all configuration changes to BIOS, iDRAC, RAID, and firmware — even by local users with physical access; all change attempts are rejected and logged in the iDRAC lifecycle log; lockdown enforces configuration freeze for PCI-DSS cardholder environments and HIPAA-covered systems where unauthorized configuration drift is a compliance violation

  • Secure Erase and Data Sanitization — The iDRAC9 Secure Erase command triggers NIST 800-88-compliant cryptographic erase on self-encrypting drives and overwrite on standard drives; the entire Lifecycle Controller log history and iDRAC configuration can be wiped simultaneously through a single iDRAC API command; the required workflow for decommissioning and re-provisioning servers in multi-tenant environments (a Redfish Secure Erase sketch follows this list)

  • BIOS Live Scanning and Rapid OS Recovery — BIOS live scanning (iDRAC9 Datacenter) continuously verifies BIOS runtime integrity without rebooting; Rapid OS Recovery enables iDRAC9 to reboot to a known-good OS image stored on BOSS-S2 or IDSDM if the primary OS partition fails integrity check — reducing mean time to recovery (MTTR) for OS-level compromise or corruption events
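
Secure Erase is also reachable through the DMTF-standard Redfish Drive.SecureErase action. A hedged sketch — the storage controller and drive IDs below are placeholders that must be enumerated first, and the operation is destructive:

```python
# Trigger the standard Redfish Drive.SecureErase action on one drive.
# WARNING: destructive -- the target drive is wiped.
import requests

IDRAC = "https://idrac.example.com"  # placeholder
AUTH = ("root", "calvin")            # replace with real credentials
DRIVE = ("/redfish/v1/Systems/System.Embedded.1/Storage/RAID.SL.3-1"
         "/Drives/Disk.Bay.0:Enclosure.Internal.0-1:RAID.SL.3-1")  # placeholder IDs;
         # enumerate real ones under /redfish/v1/Systems/System.Embedded.1/Storage

resp = requests.post(
    f"{IDRAC}{DRIVE}/Actions/Drive.SecureErase",
    json={}, auth=AUTH, verify=False,  # lab only
)
print(resp.status_code)  # expect 202 Accepted with a task monitor URL
```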


Certified for VMware ESXi, RHEL, Windows Server, Ubuntu, SUSE, and Citrix Hypervisor

  • VMware ESXi — VMware ESXi certified on PowerEdge R750xs; supports all current ESXi releases on Dell's Hardware Compatibility List; iDRAC9 integrates with VMware vCenter via Dell OpenManage Integration for VMware vCenter (OMIVV) — enabling vCenter-native firmware update, hardware inventory, and lifecycle management without leaving the vSphere UI; vRealize Operations Manager integration provides AI-driven capacity analytics across R750xs clusters

  • Microsoft Windows Server with Hyper-V — Windows Server 2019 and 2022 certified; supports Hyper-V Server and Windows Server + Hyper-V role; iDRAC9 integrates with System Center Operations Manager and System Center Virtual Machine Manager via Dell OpenManage Integration for Microsoft System Center (OMIMSSC) — Windows-centric IT operations teams manage R750xs hardware lifecycle directly within SCOM and SCVMM dashboards

  • Red Hat Enterprise Linux (RHEL) — RHEL certified across current major releases; supports RHEL with KVM hypervisor, Red Hat OpenShift on bare metal, and RHEL for SAP HANA; Red Hat Ansible integration via OpenManage Ansible Modules enables declarative R750xs hardware configuration management through standard Ansible playbooks — identical RHEL automation workflows scale from one server to thousands

  • SUSE Linux Enterprise Server (SLES) — SLES certified including SLES for SAP Applications; SAP HANA on Dell PowerEdge is a certified configuration for in-memory analytics and ERP workloads within the 1 TB R750xs DDR4 ceiling; SUSE Manager integration enables patch management and configuration compliance across R750xs SLES deployments

  • Canonical Ubuntu Server LTS — Ubuntu Server LTS certified; supports Kubernetes, OpenStack, Ceph, and MAAS (Metal as a Service) bare-metal provisioning on R750xs; Dell OpenManage Ubuntu packages available for in-band hardware monitoring; popular platform for cloud-native workload deployments on refurbished PowerEdge infrastructure

  • Citrix Hypervisor — Citrix Hypervisor (formerly XenServer) certified for XenApp and XenDesktop VDI infrastructure; R750xs dual-socket core count and 1 TB DDR4 ceiling support medium-density VDI session hosting; Citrix is a primary workload target in Dell's own R750xs target workload documentation alongside VMware Horizon and Microsoft RDS

  • OS Certification Resources — Full OS version-level certification matrix, Hardware Compatibility Lists (HCL), and hypervisor-specific support details available at Dell.com/OSsupport; OEM-ready version of the R750xs available for custom branding from bezel to BIOS to packaging via Dell.com/OEM


Ready Rails II Sliding Rails + CMA + SRB — Tool-Less 19-inch Rack Installation in 2U

  • Chassis Dimensions — Height: 86.8 mm (3.41 inches / 2U); Width: 482.0 mm (18.97 inches); Depth: 707.78 mm (27.85 inches) without bezel, 721.62 mm (28.4 inches) with bezel; weight: 20.44 kg (45.06 lb) in 8 × 2.5-inch configuration up to 28.76 kg (63.40 lb) fully loaded 12 × 3.5-inch with rails and bezel; R750xs slim chassis fits standard 19-inch EIA-310-E racks with no side spacers

  • Ready Rails II Sliding Rails (B21) — Tool-less drop-in installation in 19-inch EIA-310-E square-hole or unthreaded round-hole 4-post racks including all generations of Dell racks; tooled installation in threaded-hole 4-post racks; supports full extension of the server out of the rack for serviceability of all internal components without disconnecting cables; optional CMA and SRB attachment points on both rail sides

  • Stab-in / Drop-in Sliding Rails (B22) — Alternative sliding rail for environments requiring stab-in or drop-in installation flexibility; tool-less in 19-inch square-hole and unthreaded round-hole racks, tooled in threaded round-hole racks; full extension for in-rack service; optional CMA; outer CMA brackets removable to shorten overall rail length when the CMA is not needed — eliminates interference with rear PDUs and rack doors in space-constrained rows

  • Static Rails (B20 Ready Rails) — Stab-in installation; tool-less in 19-inch square and unthreaded round-hole 4-post racks and all generations of Dell racks; tooled in threaded 4-post and Dell EMC Titan/Titan-D racks; wider rack compatibility than sliding rails; does not support in-rack serviceability or CMA attachment; screw head diameter must be 10mm or less for threaded installations; recommended for high-density populated racks where slide serviceability is not required

  • Cable Management Arm (CMA) — Optional CMA for sliding rails organizes rear cable bundles and unfolds during full server extension without disconnecting any cables; large U-shaped cable baskets accommodate dense cable loads from dual PSUs, multiple rear NIC cables, and iDRAC management cabling; open vent pattern preserves rear-panel airflow; mounts tool-less on either left or right rail side; hook-and-loop straps eliminate cable damage risk from cycling

  • Strain Relief Bar (SRB) — Optional SRB for sliding rails; two depth positions accommodate different cable load profiles and rack depths; isolates cable stress from rear server connectors during full extension; cables grouped into discrete purpose-specific bundles (power, data, management) for organized rear panel cable management; tool-less attachment to sliding rails; compatible with CMA for combined CMA + SRB cable management

  • Dell EMC Enterprise Rail Sizing and Rack Compatibility Matrix — Reference the online Rail Sizing Matrix for specific rail adjustability ranges, rack mounting flange type compatibility, rail depth with and without CMA accessories, and rack type support details; matrix covers hundreds of rack models including all current and legacy Dell EMC rack generations sold since the 13G platform launch

R750xs vs R740 — Gen 15 Delivers PCIe Gen 4, 3200 MT/s DDR4, NVMe Front, and Embedded LOM

| Feature | R750xs (Gen 15) | R740 (Gen 14) |
|---|---|---|
| Processor Generation | 3rd Gen Intel Xeon Scalable (Ice Lake-SP) | 2nd Gen Intel Xeon Scalable (Cascade Lake) |
| Max Cores per CPU | 32 cores / 64 threads | 28 cores / 56 threads |
| PCIe Generation | Gen 4 — 64 lanes/socket at 16 GT/s | Gen 3 — 8 GT/s per lane |
| PCIe Expansion Slots | Up to 5 × Gen 4 + 1 × Gen 3 | Up to 8 × Gen 3 |
| Memory DIMM Slots | 16 DDR4 RDIMM | 24 DDR4 RDIMM/LRDIMM/Optane |
| Max Memory Speed | 3200 MT/s DDR4 | 2933 MT/s DDR4 |
| Max Memory Capacity | 1 TB (RDIMM only) | 1 TB RDIMM; up to 3 TB with LRDIMM |
| Intel Optane PMem | Not supported | Up to 12 × Intel Optane DC PMem |
| NVMe Front Drives | Up to 8 × NVMe U.2 Gen 4 (24-bay config) | Up to 12 × PCIe SSD (NVMe) Gen 3 |
| Max Front Drive Count | 24 (16 SAS/SATA + 8 NVMe) | 16 × 2.5-inch or 8 × 3.5-inch |
| Embedded NIC (LOM) | 2 × 1 GbE (BCM5720) | None — add-in NIC required |
| NIC Slot Standard | OCP 3.0 (PCIe Gen 4 x16) | OCP 2.0 / rNDC (PCIe Gen 3) |
| PERC Controller Slot | fPERC (dedicated planar slot) | Standard PCIe slot (consumes a user slot) |
| BOSS Module | BOSS-S2 (HWRAID, 2 × M.2 SATA) | BOSS-S1 (2 × M.2 SATA) |
| UPI Interconnect Speed | Up to 11.2 GT/s (Gold / Platinum) | Up to 10.4 GT/s |
| Form Factor | 2U rack server (86.8 mm) | 2U rack server |

  • PCIe Gen 4 — The Core Upgrade Benefit — PCIe Gen 4 doubles per-lane bandwidth to 16 GT/s vs. Gen 3's 8 GT/s; 100 GbE NICs, NVMe RAID controllers, and InfiniBand HDR adapters that saturated Gen 3 bandwidth now operate at their rated speeds; every PCIe add-in card in the R750xs benefits from Gen 4 headroom — a platform-level improvement that benefits all I/O-intensive workloads without changing application code

  • 3200 MT/s DDR4 vs 2933 MT/s — The R750xs raises DDR4 memory bandwidth by +267 MT/s per channel vs. the R740's Cascade Lake maximum of 2933 MT/s; at 8 bytes per transfer that is roughly 2.1 GB/s more per channel, or about 34 GB/s aggregate across the 16 channels of a dual-socket system — a measurable improvement for in-memory analytics, HPC memory-bound workloads, and large Java heap applications where DDR4 bandwidth is the bottleneck (worked numbers after this list)

  • Embedded LOM Frees a PCIe Slot — R740 required a separate NIC add-in card for any Ethernet connectivity beyond the iDRAC management port; R750xs includes 2 × 1 GbE embedded LOM — freeing the PCIe slot that the R740 NIC occupied for a different add-in card (FC HBA, 25 GbE upgrade, or storage controller)

  • fPERC Frees Another PCIe Slot — R740 PERC controllers (H330, H730P, H740P) consumed a user PCIe slot; R750xs fPERC installs in a dedicated integrated planar slot — returning that PCIe slot to the user; paired with embedded LOM, R750xs effectively provides two additional free PCIe slots vs R740 in equivalent configurations
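
The worked numbers behind the PCIe and memory-bandwidth claims above, assuming 128b/130b encoding on both PCIe generations and 8-byte transfers per DDR4 channel:

```python
# PCIe per-lane usable bandwidth: GT/s x 128b/130b efficiency / 8 bits.
gen3_lane = 8 * (128 / 130) / 8    # ~0.98 GB/s per PCIe Gen 3 lane
gen4_lane = 16 * (128 / 130) / 8   # ~1.97 GB/s per PCIe Gen 4 lane
print(gen4_lane / gen3_lane)       # 2.0 -- exactly double

# DDR4 delta: +267 MT/s x 8 bytes per channel, 16 channels dual-socket.
delta_per_channel = (3200 - 2933) * 1e6 * 8 / 1e9  # ~2.14 GB/s per channel
print(delta_per_channel * 16)      # ~34 GB/s aggregate across 16 channels
```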

ProDeploy, ProSupport Plus, SupportAssist, and Lifecycle Services for PowerEdge R750xs

  • ProDeploy Enterprise Suite — Factory and on-site deployment services to get R750xs from unboxing into optimized production; ProDeploy Plus includes environmental assessments, migration planning, OS/hypervisor installation, OpenManage setup, and knowledge transfer; Basic Deployment provides professional installation by Dell-certified technicians; Server Configuration Services delivers systems pre-racked, cabled, RAID-configured, and BIOS-tuned before arrival

  • Dell EMC ProSupport Plus (Recommended) — Dell's premium proactive support service with assigned Services Account Manager, immediate advanced troubleshooting, personalized preventive recommendations based on support trend analysis, predictive issue detection via SupportAssist, automated case creation and proactive expert outreach, and on-demand TechDirect analytics reporting; the recommended support tier for business-critical R750xs production deployments

  • ProSupport for Enterprise — 24×7 support via phone, chat, and online; predictive automated diagnostic tools; single point of accountability for all hardware and software issues; collaborative third-party support; hypervisor, OS, and application support; optional next-business-day or 4-hour mission-critical onsite parts-and-labor response; globally consistent support experience regardless of deployment location

  • SupportAssist — Proactive and Predictive Intelligence — SupportAssist is included at no charge with all support plans; automated hardware health monitoring detects degraded components, failed drives, and thermal anomalies before they cause unplanned downtime; automatic case creation and parts dispatch on detected failures reduces mean time to repair; TechDirect integration provides on-demand ProSupport Plus analytics and self-service parts dispatch

  • ProSupport for HPC — Solution-aware HPC-specialized support tier for R750xs cluster deployments; access to senior HPC experts; advanced HPC cluster assistance for performance, interoperability, and configuration; enhanced end-to-end HPC solution support; remote pre-support engagement with HPC Specialists during ProDeploy implementation; available for R750xs nodes deployed in MPI, RDMA, and InfiniBand HPC cluster configurations

  • Residency, Remote Consulting, and Managed Services — Residency Services provides on-site or remote Dell EMC expert assistance for technology transitions and day-to-day operational management; Remote Consulting Services optimizes server configurations for specific workloads using best practices; Managed Services option reduces complexity by delegating day-to-day R750xs infrastructure operations to Dell's expert team under guaranteed SLAs


Frequently Asked Questions — Dell PowerEdge R750xs

How many drives does the Dell PowerEdge R750xs support?

The Dell PowerEdge R750xs supports up to 24 drives in a 2U chassis — up to 16 × 2.5-inch SAS/SATA hot-plug drives plus 8 × 2.5-inch NVMe U.2 Gen 4 drives in the high-density front bay configuration, reaching up to 184.32 TB raw capacity. Alternatively, up to 12 × 3.5-inch LFF SAS/SATA drives for high-capacity spinning disk deployments up to 192 TB, or 8 × 3.5-inch for a mid-density layout. Up to 2 additional rear hot-plug bays add a further 15.36 TB.

How much memory does the Dell PowerEdge R750xs support?

The Dell PowerEdge R750xs has 16 DDR4 RDIMM slots supporting a maximum of 1 TB using 16 × 64 GB RDIMMs at speeds up to 3200 MT/s across 8 memory channels. Note that the R750xs supports only RDIMMs (not LRDIMMs or Intel Optane PMem 200 Series) — making it the right choice for workloads requiring fast, high-frequency DDR4 RDIMM capacity rather than extreme total memory density. Configure your R750xs memory at ECS.

What is the difference between the R750xs and the R750?

The R750xs is a storage-optimized, scale-out variant of the R750 platform. Key differences: the R750xs has 16 DDR4 RDIMM slots vs 32 on the R750, a maximum of 1 TB RAM vs 8 TB (LRDIMM), no Intel Optane PMem support, no GPU support, and up to 5 PCIe Gen 4 + 1 Gen 3 slots vs 8 PCIe Gen 4 slots on the R750. In exchange, the R750xs offers more flexible storage bay configurations — up to 24 front drives (16 SAS/SATA + 8 NVMe) — at a lower cost and more compact footprint, making it purpose-built for virtualization, software-defined storage, and scale-out file servers rather than large-memory or GPU-accelerated workloads.

Can I buy a refurbished Dell PowerEdge R750xs?

Yes. Express Computer Systems stocks professionally reconditioned refurbished Dell PowerEdge R750xs servers tested, cleaned, and configured to your exact drive bay, memory, and PERC controller specifications — ready to deploy for virtualization, software-defined storage, or scale-out environments at significant cost savings versus new. Shop refurbished Dell R750xs servers at ECS.

How does the R750xs improve on the R740?

The Dell PowerEdge R750xs (Gen 15) upgrades the R740 (Gen 14) with 3rd Gen Intel Xeon Scalable processors (up to 32 cores vs 28), PCIe Gen 4 vs Gen 3, 3200 MT/s DDR4 vs 2933 MT/s, an embedded dual 1 GbE LOM absent on the R740, OCP 3.0 replacing the older rNDC standard, an fPERC dedicated storage controller slot freeing user PCIe slots, BOSS-S2 with dual M.2 hardware RAID replacing the original BOSS-S1 module, and NVMe Gen 4 front drive support — all within the same 2U chassis footprint.

Express Computer Systems

Ready to Deploy the Dell PowerEdge R750xs?

Express Computer Systems specializes in professionally reconditioned Dell PowerEdge servers. Every R750xs is inspected, tested, and configured to your spec — backed by our knowledgeable team and fast fulfillment.

Start building your custom server today