Dell PowerEdge R760xa Specs and Features
Dell PowerEdge R760xa
Purpose-Built GPU Acceleration in a 2U Dual-Socket Chassis
- 2U dual-socket rack server — 16th Generation PowerEdge; Regulatory Model E102S001; Intel processors exclusively, with no AMD option
- Differentiated from the standard R760 — full-height, full-length front-facing GPU riser bays with dedicated GPU bridge support enable configurations not possible on the general-purpose R760
- Supports up to two 4th Gen Intel® Xeon® Scalable processors (Sapphire Rapids) in LGA4677 sockets; up to 56 cores per socket and 112 cores total dual-socket
- 32 DDR5 RDIMM slots — 16 per processor; up to 4800 MT/s (1DPC) or 4400 MT/s (2DPC); maximum 8 TB ECC capacity in a dual-processor build
- Three storage bay configurations — 6 × 2.5” NVMe SFF direct-attach, 8 × 2.5” NVMe/SAS/SATA SFF, or 6 × E3.S Gen5 NVMe EDSFF
- Up to 4 × double-wide or 12 × single-wide GPU accelerators — NVIDIA, AMD, and Intel accelerators supported; NVLink bridge cards available for multi-GPU memory pooling
- Air cooling standard; Direct Liquid Cooling (DLC) optional — 6 hot-swap fan modules; DLC enables the highest-TDP processor + GPU combinations that air cooling alone cannot support
- Chassis — 86.8 mm H × 482 mm W × 946.73 mm D (with bezel); fully loaded weight: 27.5 kg (60.63 lbs)

Engineered for AI, HPC, and the Most Demanding Accelerated Workloads
- AI/ML Training (Medium to Large Datasets) — front-facing FHFL GPU riser slots maximize card density and airflow for sustained training throughput across multi-GPU configurations
- AI/ML Inferencing at Scale — up to 12 single-wide GPU cards in a 2U chassis enable high-density inferencing deployments with per-node economics not achievable in larger chassis
- High Performance Computing (HPC) — dual 56-core Xeon Platinum processors (112 total cores) pair with PCIe Gen5 accelerators for simulation, scientific modeling, and parallel compute workloads
- Digital Twins & Render Farms — NVLink Gen4 bridge support enables cross-GPU memory pooling, dramatically increasing effective GPU memory per model for large-scale rendering and simulation
- Performance Virtualization & VDI — multiple GPU slots support dense virtual workstation deployments; vGPU profiles distribute GPU resources across many concurrent sessions
- Data Analytics & GPU-Accelerated Pipelines — PCIe Gen5 x16 direct connections to front-slot GPUs deliver full bandwidth for GPU-accelerated analytics, genomics, and financial simulation
- Containerized & Cloud-Native AI — Redfish API automation and broad OS/hypervisor certification support Kubernetes-orchestrated GPU clusters and cloud-native MLOps pipelines
4th Gen Intel® Xeon® Scalable — Up to 56 Cores Per Socket
- Socket LGA4677 (FCLGA) — supports up to two 4th Gen Intel® Xeon® Scalable processors (Sapphire Rapids); 4th Gen only — 5th Gen processors are not supported
- Maximum core count — 56 cores per socket (Xeon Platinum 8480+); 112 total cores dual-socket for massive parallel compute throughput
- Ultra Path Interconnect (UPI) — up to 4 links per CPU at 12.8, 14.4, or 16 GT/s for low-latency multi-socket coherency across NUMA domains
- PCIe 5.0 interface — up to 80 PCIe Gen5 lanes per CPU at 32 GT/s, delivering full bandwidth to all GPU riser slots and NVMe storage bays simultaneously (see the per-slot bandwidth sketch at the end of this list)
- TDP range: 150 W – 350 W — from the entry Xeon Bronze 4410Y (12 cores / 150 W) through the high-density Xeon Platinum 8480+ (56 cores / 350 W)
- Notable processor options:
  - Xeon Platinum 8480+ — 56 cores / 2.0 GHz base / 350 W TDP
  - Xeon Platinum 8470Q — 52 cores / 2.1 GHz / 350 W (requires DLC)
  - Xeon Gold 6458Q — 32 cores / 3.1 GHz / 350 W (requires DLC)
  - Xeon Gold 6448Y — 32 cores / 2.1 GHz / 225 W (ASHRAE A2 air-cooled)
  - Xeon Bronze 4410Y — 12 cores / 2.0 GHz / 150 W (entry config)
- Direct Liquid Cooling (DLC) required for the Xeon Platinum 8470Q and Gold 6458Q (350 W) in all GPU + high-ambient configurations; standard air cooling supports most SKUs up to 350 W under ASHRAE A2 (≤35°C)
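
For a back-of-envelope sense of what those Gen5 lanes deliver, the sketch below computes per-direction slot bandwidth from the 32 GT/s line rate and 128b/130b encoding — an illustrative calculation, not a Dell-published figure:

```python
# Approximate PCIe Gen5 bandwidth per slot: 32 GT/s per lane with 128b/130b
# encoding. Ignores packet/protocol overhead, so real throughput is lower.
def pcie5_bandwidth_gbs(lanes: int) -> float:
    bits_per_s = 32e9 * lanes * (128 / 130)  # line rate minus encoding overhead
    return bits_per_s / 8 / 1e9              # bits/s -> GB/s

print(f"x16 slot: {pcie5_bandwidth_gbs(16):.1f} GB/s per direction")  # ~63.0
print(f"x8 slot:  {pcie5_bandwidth_gbs(8):.1f} GB/s per direction")   # ~31.5
```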
DDR5 ECC RDIMM — 32 Slots, Up to 8 TB at 4800 MT/s
- 32 DDR5 DIMM slots — 16 per processor; 8 memory channels per CPU with 2 slots per channel for full channel utilization
- DDR5 RDIMM only — registered ECC memory exclusively; DDR4 and LRDIMMs are not supported on this platform
- 4800 MT/s (1DPC) — single DIMM per channel at 1.1 V for maximum bandwidth; a substantial step up from the DDR4-3200 of Gen 15 platforms
- 4400 MT/s (2DPC) — two DIMMs per channel for maximum capacity at only a modest speed reduction; ideal for large-model AI and in-memory database workloads (see the bandwidth sketch after this list)
- Up to 8 TB max capacity — 32 × 256 GB octa-rank RDIMMs in a dual-socket configuration; 4 TB maximum for single-processor builds
- Supported DIMM densities — 16 GB (1R), 32 GB (2R), 64 GB (2R), 128 GB (4R), 256 GB (8R)
- Memory is not hot-pluggable — the system must be powered off to install or replace DIMMs; plan capacity at build time
- Population rules — dual-processor configs support 2, 4, 8, 12, 16, 24, or 32 DIMMs; populate identically per channel for optimal symmetric NUMA bandwidth
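
To quantify the 1DPC vs. 2DPC trade-off, here is a minimal sketch of peak theoretical bandwidth for the platform's 8-channels-per-CPU layout (a 64-bit data path per channel is assumed; sustained throughput will be lower):

```python
# Peak theoretical DDR5 bandwidth: transfers/s x 8 bytes/transfer x channels.
def ddr5_peak_gbs(mt_per_s: int, channels_per_cpu: int = 8, cpus: int = 2) -> float:
    return mt_per_s * 1e6 * 8 * channels_per_cpu * cpus / 1e9

print(f"1DPC @ 4800 MT/s: {ddr5_peak_gbs(4800):.1f} GB/s")  # ~614.4 GB/s dual-socket
print(f"2DPC @ 4400 MT/s: {ddr5_peak_gbs(4400):.1f} GB/s")  # ~563.2 GB/s dual-socket
```

The 2DPC penalty is roughly 8% of peak bandwidth in exchange for doubled capacity.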
Three Chassis Options — Up to 122.88 TB Raw Capacity
- 8 × 2.5” SFF Universal (SAS/SATA/NVMe) — supports 12 Gb/24 Gb SAS SSD, 6 Gb SATA SSD, or PCIe Gen4/5 NVMe; up to 122.88 TB raw; the most flexible storage configuration
- 6 × 2.5” NVMe SFF Direct-Attach — all-NVMe chassis with PCIe Gen4/5 direct-attach connectivity; up to 92.16 TB raw; designed for ultra-low-latency AI data pipelines
- 6 × E3.S Gen5 NVMe EDSFF — enterprise EDSFF form factor at full PCIe Gen5 speeds; up to 46.08 TB raw; the highest per-slot bandwidth of any front storage configuration
- All bays hot-swap — all front drive bays support tool-less hot-plug replacement under power in RAID-protected configurations
- Max per-drive capacity — up to 15.36 TB per 2.5” 24 Gb SAS SSD; up to 7.68 TB per E3.S slot; NVMe capacities scale with available SKUs (the raw-capacity ceilings above follow directly, as the sketch below shows)
- Storage topology note — E3.S and 6-bay NVMe configurations bypass traditional SAS/SATA backplanes; the SAS/NVMe mixed chassis supports both PERC RAID and direct NVMe-switch topologies
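
The raw-capacity ceilings are simply bay count × largest supported drive; a quick sketch confirming the arithmetic:

```python
# Raw capacity per chassis option = bay count x max per-drive capacity (TB).
configs = {
    "8 x 2.5in SAS/SATA/NVMe": (8, 15.36),
    "6 x 2.5in NVMe direct-attach": (6, 15.36),
    "6 x E3.S Gen5 NVMe": (6, 7.68),
}
for name, (bays, drive_tb) in configs.items():
    print(f"{name}: {bays * drive_tb:.2f} TB raw")
# -> 122.88 TB, 92.16 TB, 46.08 TB — matching the figures above
```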
PERC 12 & PERC 11 RAID — Up to 24 Gb SAS with 8 GB NV Cache
- PERC H965i (PERC 12, internal) — 24 Gb SAS / PCIe Gen4 NVMe RAID; 8 GB NV cache; RAID 0/1/5/6/10/50/60; the primary RAID controller for the R760xa
- PERC H965e (PERC 12, external) — 24 Gb SAS external RAID for connecting external JBODs; pairs with the H965i for split boot + data topologies
- PERC H755 (PERC 11, internal) — 12 Gb SAS; 8 GB NV cache; RAID 0/1/5/6/10/50/60; a cost-effective choice for SAS/SATA-only configurations
- PERC H755N (PERC 11, NVMe) — NVMe-native RAID controller; 8 GB NV cache; ideal for all-NVMe workloads requiring hardware RAID protection
- PERC H355 (PERC 11) — 12 Gb SAS entry RAID controller; no cache; suitable for less performance-critical RAID 0/1/5/10 configurations
- HBA355i / HBA355e (HBA 11) — 12 Gb SAS host bus adapters for pass-through storage in software RAID or SDS deployments (no RAID processing on-card)
- Software RAID S160 — CPU-based RAID for NVMe-only configurations requiring no dedicated controller card; RAID 0/1/5/10 supported (usable capacity varies by RAID level — see the sketch below)
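
Usable capacity depends on the RAID level chosen on the controller. A minimal sketch for the 8-bay chassis with 15.36 TB drives (data-drive arithmetic only; controller metadata and hot spares are ignored):

```python
# Usable capacity after RAID redundancy overhead (simplified model).
def usable_tb(drives: int, drive_tb: float, level: str) -> float:
    parity_or_mirror = {"RAID0": 0, "RAID5": 1, "RAID6": 2,
                        "RAID10": drives // 2}[level]
    return (drives - parity_or_mirror) * drive_tb

for level in ("RAID0", "RAID5", "RAID6", "RAID10"):
    print(f"{level}: {usable_tb(8, 15.36, level):.2f} TB usable")
# RAID0 122.88, RAID5 107.52, RAID6 92.16, RAID10 61.44
```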
BOSS-N1 — Dedicated NVMe Boot Module with Hardware RAID-1
- BOSS-N1 module — Boot Optimized Storage Subsystem, NVMe M.2 interface, hardware RAID-1 mirrored pair; successor to the Gen 15 BOSS-S2 (SATA M.2)
- Dedicated rear slot — installs in its own rear-chassis module slot and does not consume any PCIe riser slot, front drive bay, or OCP slot
- RAID-1 hardware mirror — two NVMe M.2 SSDs configured in hardware-managed RAID-1; OS boot redundancy with no OS-level configuration required
- Upgrade from BOSS-S2 — the NVMe interface replaces SATA M.2, with significantly higher sequential read/write throughput for faster OS boot and lifecycle operations
- Alternative boot options — an optional internal USB 3.0 port is available for bootable OS media; bare-metal deployment via iDRAC Zero Touch Provisioning is also supported
- ASHRAE restriction — BOSS-N1 is not supported in ASHRAE A4 environments (40–45°C ambient); plan cooling accordingly in hot-aisle/cold-aisle sealed deployments

Up to 4 Double-Wide or 12 Single-Wide GPU Accelerators
- Double-wide GPU configuration — up to 4 × FHFL PCIe Gen5 x16 double-wide GPU cards (350 W TDP each, 1,400 W total GPU TDP); front-facing riser placement for maximum airflow
- Single-wide GPU configuration — up to 12 × PCIe Gen5 x8 single-wide 75 W cards, or up to 8 × PCIe Gen5 x16 SW cards in alternate riser configurations
- GPU vendors supported — NVIDIA (A/H/L series), AMD Instinct, and Intel Flex Series accelerators certified; check the Dell Compatibility Matrix for per-SKU ASHRAE limits
- Front-facing full-height, full-length risers — RF1A / RF1B risers (CPU1) and RF2A / RF2B risers (CPU2) face the front intake, enabling physically longer, higher-TDP cards than rear-mounted designs
- NVLink / GPU bridge support — NVLink Gen4 bridge cards can be installed between adjacent GPU pairs on RF1A/RF1B and RF2A/RF2B for memory pooling up to 2× the per-card HBM
- Thermal limits — GPUs above 400 W are not supported under ASHRAE A2 air cooling; 350 W GPUs combined with CPUs >225 W require ambient ≤30°C; high-TDP GPU + CPU combinations may require DLC (see the power-budget sketch after this list)
- Riser card options — R1V (rear, CPU1): 2 × FH/HL Gen5 x16; R4T (rear, CPU2): 2 × FH/HL Gen5 x16; RF1A / RF2A (front DW): 2 × FHFL Gen5 x16; RF1B / RF2B (front SW): 4 × FHFL Gen5 x8
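
To see why the thermal and PSU guidance matters, here is a rough power-budget sketch for a fully loaded double-wide configuration; the non-GPU/CPU figures are illustrative assumptions, not Dell figures — use Dell's Enterprise Infrastructure Planning Tool for real sizing:

```python
# Illustrative power budget vs. one 2800 W PSU (1+1 redundant). Component
# estimates other than the GPU/CPU TDPs are assumptions, not Dell data.
budget_w = {
    "4 x DW GPU @ 350 W": 4 * 350,
    "2 x Xeon @ 350 W": 2 * 350,
    "32 x DDR5 RDIMM (est. 10 W each)": 32 * 10,
    "drives, fans, NICs, misc (est.)": 300,
}
total = sum(budget_w.values())
print(f"estimated load: {total} W -> {total / 2800:.0%} of a 2800 W PSU")
# ~2720 W, ~97% — why Titanium PSUs and careful thermal planning are required
```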
All-Gen5 PCIe — Up to 12 Slots, CPU-Affined for Zero-Hop GPU Paths
- Rear risers (always present) — R1V (CPU1, Slots 1–2): 2 × FH/HL PCIe Gen5 x16; R4T (CPU2, Slots 7–8): 2 × FH/HL PCIe Gen5 x16 — 4 rear slots in total for standard NIC/HBA cards
- Front GPU risers, DW config (RF1A + RF2A) — RF1A (CPU1, Slots 36, 38): 2 × FHFL Gen5 x16; RF2A (CPU2, Slots 31, 33): 2 × FHFL Gen5 x16 — supports 4 × double-wide GPUs (DW config total: 8 slots)
- Front GPU risers, SW config (RF1B + RF2B) — RF1B (CPU1, Slots 35–38): 4 × FHFL Gen5 x8; RF2B (CPU2, Slots 31–34): 4 × FHFL Gen5 x8 — supports up to 12 × single-wide GPUs (SW config total: 12 slots)
- CPU affinity design — RF1A/RF1B and R1V connect directly to CPU1; RF2A/RF2B and R4T connect directly to CPU2 — eliminating cross-CPU NUMA-hop latency on GPU-to-CPU data paths (see the affinity sketch after this list)
- OCP 3.0 mezz slot — PCIe Gen4 x8; accepts OCP NIC 3.0 cards; LOM and OCP can both be installed simultaneously for dual-NIC configurations
- No PCIe Gen3 slots — unlike the standard R760, every slot on the R760xa is Gen5-native, ensuring full bandwidth to all installed accelerators and future-ready headroom
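
The practical payoff of CPU-affined risers is that host-side feeding threads can be pinned to the GPU's local socket. A minimal Linux sketch, assuming CPU1's cores map to NUMA node 0 (verify with `numactl --hardware` on your system):

```python
# Pin the current process to the NUMA node local to a front-riser GPU so
# preprocessing stays on the CPU that owns that riser's PCIe lanes.
import os

def cpus_for_numa_node(node: int) -> set[int]:
    """Parse the kernel cpulist (e.g. '0-55') for a NUMA node."""
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        cpus: set[int] = set()
        for part in f.read().strip().split(","):
            lo, _, hi = part.partition("-")
            cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus

# RF1A/RF1B hang off CPU1 — NUMA node 0 in a typical layout (an assumption).
os.sched_setaffinity(0, cpus_for_numa_node(0))
```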
Flexible NIC Options — LOM, OCP 3.0, and Dedicated iDRAC Port
- Embedded LOM (optional) — 2 × 1 GbE rear-panel ports via the Broadcom BCM5720 LAN controller; suitable for management or low-bandwidth data traffic
- OCP 3.0 card (optional) — PCIe Gen4 x8 mezz slot; LOM and OCP can both be installed simultaneously for independent management + data NIC paths
- OCP 25 GbE options — Broadcom or Intel SFP28 cards with 2 or 4 ports of 25 GbE for high-throughput GPU cluster interconnects
- OCP 10 GbE options — Broadcom or Intel Base-T cards with 2 or 4 ports of 10 GbE for standard enterprise and storage traffic
- OCP 1 GbE options — Broadcom or Intel Base-T cards with 4 ports of 1 GbE; NC-SI and Wake-on-LAN supported on all OCP variants
- Dedicated iDRAC9 management port — a separate 1 GbE rear Ethernet port for out-of-band iDRAC9 access; management traffic is completely isolated from the host data NICs
- InfiniBand / high-speed fabrics — rear riser slots (R1V, R4T) accept full-height InfiniBand or 100–400 GbE adapter cards for GPU cluster interconnects; check the Dell Compatibility Matrix for certified options
Up to 2 × 2800 W Titanium PSUs — Hot-Swap Redundant
- Two redundant PSU bays — 1+1 hot-swap 86 mm form factor; both PSU slots must be populated for redundancy in GPU-dense configurations
- 2400 W Platinum AC/HVDC — 100–240 VAC or 240 VDC; ≥94% efficiency at 50% load; C19 power cord; 4,080 W peak at high line; derates to 1,400 W at 100–120 VAC low line
- 2800 W Titanium AC/HVDC — 200–240 VAC or 240 VDC only; ≥96% efficiency at 50% load; C21 power cord; 4,760 W peak at high line; low-line operation not supported
- DC Mixed Mode variants — 2400 W and 2800 W DC-capable PSUs available for data centers with centralized DC power distribution
- Auto-sensing, auto-switching — no manual voltage selection required; PSUs automatically detect and adapt to the available supply voltage
- Heat dissipation — 2400 W Platinum: 9,000 BTU/hr; 2800 W Titanium: 10,500 BTU/hr; plan rack PDU and cooling capacity accordingly for fully loaded GPU configurations (see the conversion sketch after this list)
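
Since PDU budgets are quoted in watts and cooling in BTU/hr, a one-line conversion (1 W ≈ 3.412 BTU/hr) makes the two figures comparable:

```python
# Convert the quoted heat-dissipation figures from BTU/hr back to watts.
BTU_HR_PER_WATT = 3.412

for label, btu_hr in (("2400 W Platinum", 9000), ("2800 W Titanium", 10500)):
    print(f"{label}: {btu_hr} BTU/hr ~= {btu_hr / BTU_HR_PER_WATT:.0f} W of heat")
# ~2638 W and ~3077 W respectively — plan rack cooling to these numbers
```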
Air Cooling & Optional Direct Liquid Cooling for Extreme TDP Configs
- Standard air cooling — 6 hot-plug fan modules with N+1 redundancy; all configurations require all 6 fans populated; standard 2U fan size; no HPR (High Performance) silver/gold fan variants on this platform
- Direct Liquid Cooling (DLC) — optional CPU-only liquid cooling; requires rack manifolds and a Cooling Distribution Unit (CDU); DLC enables the 350 W processor SKUs (8470Q, 6458Q) not supported under air-only A2 conditions with high-TDP GPUs
- ASHRAE A2 (standard air) — 10–35°C inlet; supports processors up to 350 W (except 8470Q/6458Q); GPU TDP limited to 350 W; the most common data center class
- ASHRAE A3 — 5–40°C inlet; processor TDP ≤225 W; only the NVIDIA A2 is approved in a front GPU slot under A3 ambient conditions
- ASHRAE A4 — 5–45°C inlet; processor TDP ≤185 W; limited NVMe and OCP support; BOSS-N1 not supported; NVIDIA A2 GPU only in the front slot
- DLC note — installing DLC removes the optional rear VGA port; a rear VGA kit for DLC configurations is available as a separate accessory
- Operational altitude — 3,050 meters (10,006 ft) maximum; 12,000 meters (39,370 ft) non-operational storage/shipping limit
Front & Rear I/O — Including Dedicated Out-of-Band Management Port
- Front I/O — 1 × USB 2.0 Type-A; 1 × iDRAC Direct Micro-USB (management only); 1 × VGA; power button; health/ID LED indicator
- Rear I/O — 1 × USB 3.0; 1 × USB 2.0; 1 × dedicated iDRAC9 Ethernet (1 GbE, out-of-band); optional VGA (standard air config only; removed when DLC is installed)
- Optional serial port — 9-pin DTE RS-232 (16550-compliant); installs via a front expansion bracket; useful for legacy BIOS console access without network dependency
- Internal USB — optional internal USB 3.0 connector for bootable OS media or USB-based lifecycle tools
- Quick Sync 2 (optional) — Bluetooth 5.0 / NFC wireless front-panel module for inventory, configuration, and health monitoring via the OpenManage Mobile app without a wired network connection
- Integrated video — Matrox G200 graphics; 16 MB frame buffer; resolutions up to 1920 × 1200 at 60 Hz for console/KVM use during initial setup
- System ID button — present on both front and rear panels; illuminates a physical location LED for easy identification of a server in dense rack environments

Cyber Resilient Architecture — Silicon Root of Trust to Secure Erase
- Silicon Root of Trust — hardware-anchored boot integrity embedded in iDRAC silicon; cryptographically verifies every boot component before execution, preventing rootkit and bootloader attacks
- Cryptographically signed firmware — all firmware packages are digitally signed; iDRAC automatically rejects any unsigned or tampered firmware update
- Secure Boot — UEFI Secure Boot prevents unauthorized OS loaders or bootloaders from executing during system startup
- TPM 2.0 — FIPS 140-2 certified and CC-TCG certified; a China NationZ TPM is available as a separate SKU for government-regulated deployments
- Secured Component Verification (SCV) — factory-to-site hardware integrity verification; digitally attests that components have not been substituted or tampered with during shipping or storage
- Data at Rest Encryption (D@RE) — self-encrypting drives (SEDs) with local key management or KMIP-compliant external key management server integration
- System Lockdown — firmware and configuration lockdown mode prevents unauthorized changes; requires an iDRAC9 Enterprise or Datacenter license
- Secure Erase — cryptographic erase of all storage devices (SSD, NVMe, HDD) for data sanitization before decommissioning; Multi-Factor Authentication and Role-Based Access Control are also supported
iDRAC9 — Comprehensive Lifecycle Management & Automation
- iDRAC9 — Integrated Dell Remote Access Controller 9; Express license standard on 600+ series; Enterprise and Datacenter license upgrades add advanced telemetry, lockdown, and Virtual Console features
- RESTful API with Redfish — DMTF-standard Redfish open API for infrastructure automation and integration with Red Hat Ansible, Terraform, Chef, Puppet, and PowerShell DSC (see the query sketch after this list)
- Zero Touch Provisioning (ZTP) — automated server discovery and bare-metal configuration via iDRAC without manual network or BIOS intervention; ideal for large-scale GPU cluster rollouts
- Dell Lifecycle Controller — embedded firmware management for updates, OS deployment, hardware configuration, and diagnostics without a separate agent or OS
- OpenManage suite — OpenManage Enterprise, Update Manager, SupportAssist, CloudIQ for PowerEdge, VMware vCenter integration, Microsoft SCOM plug-in, and Nagios integration
- iDRAC Direct (Micro-USB) — front-panel Micro-USB port for direct laptop-to-iDRAC management without network infrastructure; useful during initial configuration or rack staging
- Quick Sync 2 (optional) — Bluetooth 5.0 / NFC front-panel module; the OpenManage Mobile app enables wireless inventory, configuration, and health checks directly from a smartphone
- Protocol support — IPMI 2.0, RACADM CLI, ACPI v6.4, UEFI v2.7, SMBIOS v3.3.0, serial and video console redirection; full KVM over IP with the Datacenter license
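
As a concrete example of Redfish automation, the sketch below pulls basic system inventory from iDRAC9's standard `/redfish/v1/Systems/System.Embedded.1` resource; the hostname and credentials are placeholders, and certificate verification is disabled only for lab use:

```python
# Minimal Redfish inventory query against iDRAC9 (hostname and credentials
# are placeholders; use proper TLS verification in production).
import requests

IDRAC = "https://idrac-r760xa.example.com"  # hypothetical iDRAC address
resp = requests.get(
    f"{IDRAC}/redfish/v1/Systems/System.Embedded.1",
    auth=("root", "calvin"),  # factory default shown for illustration only
    verify=False,
)
resp.raise_for_status()
system = resp.json()
print(system["Model"],
      system["ProcessorSummary"]["Count"], "CPUs,",
      system["MemorySummary"]["TotalSystemMemoryGiB"], "GiB RAM")
```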
Certified for Leading Enterprise Operating Systems & Hypervisors
- Canonical Ubuntu Server LTS — a top choice for AI/ML workloads; certified for current LTS releases; excellent GPU driver ecosystem for NVIDIA CUDA and AMD ROCm stacks
- Microsoft Windows Server + Hyper-V — full Hyper-V virtualization support; certified for all current Windows Server versions with Dell OpenManage integration
- Red Hat Enterprise Linux (RHEL) — Tier 1 certified; full OpenManage plug-in and Ansible automation support for enterprise Linux deployments
- SUSE Linux Enterprise Server (SLES) — Tier 1 certified; preferred for HPC cluster environments and SAP HANA AI workloads
- VMware ESXi — certified hypervisor; full integration with OpenManage Integration for vCenter, vRealize Operations Manager, and VMware Cloud Foundation
- Container platforms — certified for Kubernetes (via Kubeflow and the NVIDIA GPU Operator) and OpenShift; enables GPU sharing and multi-tenant AI workloads on shared infrastructure (see the scheduling sketch after this list)
- Full OS matrix — see Dell.com/OSsupport for current certification tables by OS version and GPU card combination
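
To show how GPUs on the R760xa are consumed in a Kubernetes cluster, the sketch below uses the official `kubernetes` Python client to request one GPU via the `nvidia.com/gpu` resource exposed by the NVIDIA device plugin / GPU Operator — a smoke test, assuming a working kubeconfig on the client machine:

```python
# Launch a one-shot pod that requests a single GPU and runs nvidia-smi.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="cuda",
            image="nvidia/cuda:12.2.0-base-ubuntu22.04",
            command=["nvidia-smi"],
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"},  # scheduled by the device plugin
            ),
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```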

R760xa vs. R750xa — The Case for Upgrading to Gen 16
- 4th Gen vs. 3rd Gen Xeon — Sapphire Rapids replaces Ice Lake; higher UPI bandwidth (up to 16 GT/s vs. 11.2 GT/s), native PCIe Gen5 support, and MCC/XCC topology improvements for GPU workloads
- DDR5 replaces DDR4 — up to 4800 MT/s DDR5 vs. 3200 MT/s DDR4; substantially higher memory bandwidth for AI model loading, large-batch inference, and in-memory data pipelines
- Higher GPU TDP support — 350 W double-wide GPUs vs. 300 W on the R750xa; enables the latest 350 W-class PCIe accelerators such as the NVIDIA H100 PCIe and AMD Instinct cards
- 12 vs. 8 SW GPU slots — the R760xa supports up to 12 single-wide GPUs vs. 8 on the R750xa; 50% more density for inferencing-optimized deployments
- E3.S Gen5 NVMe storage option — a new EDSFF form factor at PCIe Gen5 speeds, not available on any Gen 15 platform; the highest per-slot storage bandwidth for AI data pipelines
- BOSS-N1 replaces BOSS-S2 — the NVMe M.2 hardware RAID-1 boot module replaces SATA M.2, with significantly faster OS boot and lifecycle management operations
- Deeper chassis — 946.73 mm (R760xa) vs. 908.64 mm (R750xa) with bezel; verify rack depth before planning a Gen 15-to-Gen 16 migration
| Feature | R750xa (Gen 15) | R760xa (Gen 16) |
|---|---|---|
| Processor Generation | 3rd Gen Intel Xeon Scalable (Ice Lake) | 4th Gen Intel Xeon Scalable (Sapphire Rapids) |
| Memory Type | DDR4 RDIMM/LRDIMM + Optane PMem | DDR5 RDIMM only, up to 4800 MT/s |
| Max DW GPU TDP | 300 W × 4 | 350 W × 4 |
| Max SW GPUs | 8 × single-wide | 12 × single-wide |
| Front Storage Options | 8 × 2.5” SAS/SATA/NVMe only | 6 × NVMe, 8 × SAS/NVMe, or 6 × E3.S Gen5 |
| Boot Module | BOSS-S2 (SATA M.2 HWRAID) | BOSS-N1 (NVMe M.2 HWRAID) |
| Max PSU | 2800 W Titanium | 2800 W Titanium + 2400 W Platinum tier |
| Chassis Depth (with bezel) | 908.64 mm (35.77”) | 946.73 mm (37.27”) |
Standard 2U — Built for Dense 19-Inch Rack Environments
- Height: 86.8 mm (3.41 inches) — standard 2U; fits any EIA-310-E compliant 4-post 19-inch rack at full density
- Width: 482 mm (18.97 inches) — standard 19-inch rack width; ear-to-ear chassis width 434 mm (17.08 inches)
- Depth: 946.73 mm (37.27 inches) with bezel — the deepest chassis in the Gen 16 dual-socket GPU line; verify rack depth before deploying fully loaded GPU configurations
- Max weight: 27.5 kg (60.63 lbs) fully loaded — a 2-person lift or mechanical lift is strongly recommended for safe rack installation
- ReadyRails sliding rail kit — tool-less installation in Dell Titan racks; adjustable for EIA-310-E square and unthreaded round hole flanges on any standard 19-inch 4-post rack
- Static rail option — 2-post and 4-post rack compatibility; ideal for fixed deployments where tool-less sliding access is not required
- Cable Management Arm (CMA) — available for rear cable organization; note that the CMA is not supported in Direct Liquid Cooling (DLC) configurations
- Quick Resource Locator (QRL) label on the chassis — scan with a smartphone for instant access to product documentation, rail installation guides, and iDRAC credentials
Dell ProSupport & ProDeploy — Enterprise Services for GPU Infrastructure
- Dell ProSupport Next Business Day (NBD) — standard 3-year support with 24×7 phone access and next-business-day on-site parts and labor at no additional charge
- ProSupport Plus — adds predictive analytics, proactive monitoring, and SupportAssist automated case creation; ideal for production GPU clusters where unplanned downtime is costly
- ProDeploy Basic — hardware installation and initial firmware update by Dell-certified technicians during standard business hours
- ProDeploy — hardware installation plus OS and GPU driver configuration by Dell-certified deployment engineers; recommended for complex GPU cluster commissioning
- ProDeploy Factory Configuration — systems ship with custom OS images, BIOS settings, GPU provisioning, and asset tags pre-applied; minimizes on-site setup time for volume GPU server deployments
- ProDeploy Rack Integration — complete rack builds with hardware installation, cabling, and system configuration completed before the rack ships to your facility
- Dell Residency Services — embedded Dell experts work alongside your AI/HPC team for knowledge transfer and hands-on operational support during production go-live
- APEX Flex on Demand — a consume-as-you-go payment model for the R760xa; scale payments to match actual GPU utilization, reducing upfront capital commitment for expanding AI workloads

Frequently Asked Questions — Dell PowerEdge R760xa
How much memory does the R760xa support? The Dell PowerEdge R760xa supports up to 8 TB of DDR5 ECC memory across 32 DIMM slots (16 per socket) at up to 4800 MT/s — engineered for AI training, GPU-accelerated simulation, and memory-hungry deep learning frameworks running alongside PCIe Gen5 accelerators.
Which GPUs does the R760xa support? The R760xa supports up to 4 × double-width PCIe Gen5 GPU cards or a mix of double- and single-width accelerators, including NVIDIA H100 PCIe, A100, L40, and RTX Ada generation cards — maximizing rack-unit efficiency for GPU-dense clusters.
Which processors does the R760xa support? The R760xa supports 4th Gen Intel Xeon Scalable processors (Sapphire Rapids, LGA4677) in a dual-socket configuration with up to 56 cores per CPU. High core counts and PCIe Gen5 bandwidth ensure CPUs don't become bottlenecks for GPU feeding and data preprocessing pipelines.
Can I buy a refurbished R760xa? Yes. Express Computer Systems stocks professionally reconditioned Dell PowerEdge R760xa rack servers optimized for AI and GPU workloads. Configure a refurbished Dell R760xa at ECS.
How does the R760xa improve on the R750xa? The R760xa (Gen 16) upgrades the R750xa (Gen 15) with 4th Gen Xeon Scalable vs. 3rd Gen, PCIe Gen5 vs. Gen4 (critical for feeding next-generation GPUs at full bandwidth), DDR5 vs. DDR4 with up to 8 TB capacity, support for NVIDIA H100 PCIe configurations, and the latest iDRAC9 automation features.
Ready to Deploy the Dell PowerEdge R760xa?
Express Computer Systems specializes in professionally reconditioned Dell PowerEdge servers — fully tested, configured to your exact GPU, memory, and storage specifications, and ready to ship. Get Gen 16 AI acceleration performance at a fraction of the new-unit price.
Start building your custom server today