Current Configuration
(List below serves as quick links to each section)
Select Chassis
Select Processor
Rugged Heater Manager (HM)
Memory
Storage Options
M.2 Drives
Riser Configuration
Rail Kits
Trusted Platform Module
iDRAC9 Management
Power Supply Units (PSU)
ECS Warranty
Dell PowerEdge XR8000 Rugged Rack Server CTO
Quick Specs
XR8610t: 1U Node Server
XR8620t: 2U Rack Server
1 x 4th Generation Intel® Xeon® Scalable processor with optional vRAN boost up to 32 cores
Memory | Max: 512 GB; 8 x DDR5 RDIMM, 4800 MT/s
Storage:
XR8610t: BOSS-N1 with 2 x M.2 2280 NVMe SSDs, RAID 0/1
XR8620t: BOSS-N1 with 2 x M.2 2280 NVMe SSDs, RAID 0/1; or 2 x dual M.2 NVMe direct riser modules (non-RAID)
Boot Options: BOSS-N1
LOM: 2 x Broadcom 57414 25 GbE SFP28 onboard LOM (optional)
Remote Management: iDRAC9 with Lifecycle Controller
PCIe Slots: XR8610t: 1 x16 Gen5
XR8620t: 3 x16 Gen5
- Dell PowerEdge XR12
- Dell PowerEdge XR4000
- Dell PowerEdge XR5610
- Dell PowerEdge XR7620
- Dell PowerEdge XR8000
Ideal for:
- Centralized RAN
- Distributed RAN
- Edge Servers
- Military
- Point-of-Sale (POS)
Experience ultimate flexibility at the edge with Dell’s purpose-built server, featuring an innovative sled-based architecture positioned to go farther than any other Edge product. It is designed for a broad range of workloads, with the flexibility to extend from RAN at the cell site to AI/ML on a factory floor. This short-depth, Class 1 server delivers high-performance compute in the harshest environments, with TCO at the center of every design decision.

Choose from multiple sled configurations with up to 4 nodes per chassis to meet all your growing and changing workloads.
- 1U and 2U sled options for optimum compute configurations enabling common platforms across Core, Edge, Far-edge, and RAN
- 4th generation Intel® Xeon® Scalable Processors with optional vRAN boost, up to 32 cores
- Both DC and AC power supply options
Reduce support and maintenance costs, decrease service outages, and capitalize on energy savings
- Advanced system power design enables dual PSUs to support up to 4 sleds
- Integrated fan infrastructure to support higher efficiency cooling
- Compute, power, sleds, and chassis are serviceable separately, simplifying field maintenance

Achieve high-performance compute in any environment.
- Built rugged to operate in temperatures from -5°C to 55°C
- Offers easy serviceability with front I/O and power
- Short-depth (430 mm) chassis to fit in space-constrained environments
Security is integrated into every phase of the PowerEdge lifecycle, including protected supply chain and factory-to-site integrity assurance. Silicon-based root of trust anchors end-to-end boot resilience while Multi-Factor Authentication (MFA) and role-based access controls ensure trusted operations.

- 4th generation Intel® Xeon® Scalable Processor with optional vRAN boost, up to 32 cores
- 1 x FHHL (up to 45W) (Gen5)
- Up to 2 x M.2 NVMe BOSS N1 ET
- 8 memory slots; up to 512 GB total
- -5°C to 55°C operating temperature
- Hot-swap fans are not supported.
- Minimum cold boot temperature is +5°C; do not perform a cold startup below 5°C.
- DIMM Blank is required in empty slots.
- Sled blank is required in empty slots.
- PCIe blank is required in the empty slot (slot 1).
- PSU blank is required in empty slots.

- 4th generation Intel® Xeon® Scalable Processor with optional vRAN boost, up to 32 cores
- 2 x FHHL (up to 125 W) + 1 x FHHL (up to 45 W) (Gen5)
- Up to 2 x M.2 NVMe BOSS N1 ET
- 8 memory slots; up to 512 GB total
- Optional RAID
- -5°C to 55°C operating temperature
- Hot-swap fans are not supported.
- Minimum cold boot temperature +5°C without the Heater Manager subsystem.
- Minimum cold boot temperature -20°C with the Heater Manager subsystem.
- Dual PSUs are required when the ambient temperature is ≥ 55°C.
- Only PSUs with the eTemp range are supported for the NEBS3 H, GR3108C1 L, and GR3108C1+ environment classes.
- Non-Dell PCIe Cards are not supported.
- DIMM Blank is required in empty slots.
- Sled blank is required in empty slots.
- PCIe blanks are required in empty slots 1 and 2.
- A PCIe blank is required in empty slot 3.
- PSU blank is required in empty slot.
- Heater Module is not supported with Intel Ethernet 100G 2P E810-2C.
The XR8000r is a 2U modular rack enclosure that supports up to 4 x 1U nodes, 2 x 2U nodes, or a mix of both (2 x 1U + 1 x 2U nodes). Each node has 1 CPU and up to 2 x M.2 NVMe drives, with 8 DDR5 RDIMMs supporting up to 512GB of memory per node.
Support for two 2U nodes, four 1U nodes, or a mix of two 1U and one 2U node
- 4 x 1U Half-width sleds
- 2 x 1U Half-width sled and 1 x 2U Half-width sled
- 2 x 2U Half-width sleds

Both Sleds support 1 CPU with a standard heatsink. The PowerEdge XR8610t system supports up to four cabled cooling fans.
The PowerEdge XR8620t system supports up to eight cabled cooling fans.


No matter which configuration you choose, each node supports 8 DDR5 channels with 1 DIMM per channel, for 8 DIMMs total. These 8 DIMMs can reach speeds up to 4800 MT/s (configuration-dependent). The maximum capacity per DIMM is 64 GB, for up to 512 GB per node.
| DIMM Type | Rated DIMM Speed (MT/s) | DIMM Volts (V) | DIMM Capacity (GB) | Ranks per DIMM | Data Width |
|---|---|---|---|---|---|
| RDIMM | 4800 | 1.1 | 16 | 1 | x8 |
| RDIMM | 4800 | 1.1 | 32 | 2 | x8 |
| RDIMM | 4800 | 1.1 | 64 | 2 | x4 |
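As a quick sanity check on the memory figures above, the per-node and per-chassis maximums follow directly from the channel count and the largest supported DIMM. This is our own back-of-the-envelope arithmetic, not a Dell tool:

```python
# Memory capacity arithmetic for the XR8000 figures quoted above.
CHANNELS_PER_NODE = 8    # 8 DDR5 channels per node
DIMMS_PER_CHANNEL = 1    # 1 DIMM per channel
MAX_DIMM_GB = 64         # largest supported RDIMM capacity

def max_node_memory_gb() -> int:
    """Maximum memory per node, in GB."""
    return CHANNELS_PER_NODE * DIMMS_PER_CHANNEL * MAX_DIMM_GB

def max_chassis_memory_gb(nodes: int = 4) -> int:
    """Maximum memory for a chassis populated with the given node count."""
    return nodes * max_node_memory_gb()

print(max_node_memory_gb())     # 512 GB per node
print(max_chassis_memory_gb())  # 2048 GB with four 1U nodes
```

A fully populated four-node chassis therefore tops out at 2 TB, even though each individual sled is limited to 512 GB.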
The PowerEdge XR8000 does not have a “no riser” option and supports only one riser configuration: Riser 1 on the 1U/2U sled and a floating riser on the 2U sled. Details on the two risers supported on the PowerEdge XR8000 are shown below.
- XR8610t: 1 x16 PCIe (Gen5) slot
- XR8620t: 3 x16 PCIe (Gen5) slots
| Riser Configs | PCIe slots | Expansion card riser | Processor connection | Height | Length | Slot width |
|---|---|---|---|---|---|---|
| XR8610t 1U Riser Config R2A | Slot 1 | R2A | Processor 1 | Full Height | Half Length | x16 PCIe (Gen5), single width |
| XR8620t 2U Riser Config R1A + R2A | Slot 1 | R1A | Processor 1 | Full Height | Half Length | x16 PCIe (Gen5) |
| | Slot 2 | R1A | Processor 1 | Full Height | Half Length | x16 PCIe (Gen5) |
| | Slot 3 | R2A | Processor 1 | Full Height | Half Length | x16 PCIe (Gen5) |
Riser 2A (R2A)
The 1U R2A riser connects to the system board through a dedicated PCIe slot. This riser can also be installed in the 2U XR8620t node. It has 1 x16 PCIe (Gen5) single-width FH/HL slot.
Riser 1A (R1A)
The 2U R1A riser connects its power cables to the 2U PDB and its 4 signal cables to connectors on the system board. It has 2 x16 PCIe (Gen5) single-width FH/HL slots.
No matter which Node/Sled you choose, the system can be configured with or without a Broadcom 57414 2x 25 GbE SFP28 onboard LOM (LAN on Motherboard). These networking ports are optional. The option "NC" (Network Choice) indicates that the system board does not include the 2x 25 Gbps ports. In this case, you will need to install a PCIe NIC to enable the server to connect to the network.
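Whether the LOM ports are present can also be verified remotely: iDRAC9 exposes the server's network inventory through its Redfish REST API. The sketch below uses only the Python standard library; the address and credentials are placeholders, and the exact resource path may vary by iDRAC firmware version, so treat it as an assumption to adapt rather than a definitive implementation.

```python
# Hypothetical sketch: list a server's Ethernet interfaces via the
# iDRAC9 Redfish API to see whether the optional 25 GbE LOM ports exist.
import base64
import json
import ssl
import urllib.request

def ethernet_interfaces_url(idrac_host: str) -> str:
    """Build the Redfish EthernetInterfaces collection URL for an iDRAC."""
    return (f"https://{idrac_host}"
            "/redfish/v1/Systems/System.Embedded.1/EthernetInterfaces")

def list_nics(idrac_host: str, user: str, password: str) -> list[str]:
    """Return the member URIs of the server's Ethernet interfaces."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        ethernet_interfaces_url(idrac_host),
        headers={"Authorization": f"Basic {token}"},
    )
    # iDRAC commonly ships with a self-signed certificate, so skip
    # verification here; use a proper CA bundle in production.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(req, context=ctx) as resp:
        body = json.load(resp)
    return [member["@odata.id"] for member in body.get("Members", [])]

# Example call (hypothetical address and credentials):
# list_nics("192.0.2.10", "root", "calvin")
```

On an "NC" (Network Choice) board, the collection would simply not include the two Broadcom 57414 ports.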

Both sleds support the BOSS-N1 M.2 carrier, and the 2U XR8620t also supports a non-RAID M.2 riser. The BOSS-N1 riser holds 2 x M.2 2280 drives and supports RAID 0/1. The XR8620t non-RAID M.2 riser supports 2 x M.2 2280 or M.2 22110 drives.
The BOSS-N1 riser sits on the bottom 1U sled, while the M.2 NVMe riser sits on the second level of the 2U sled. The 2U XR8620t can support only one of these at a time; the XR8610t supports only the BOSS-N1 option.
Mancini BOSS-N1 | M.2 2280
Both sled servers support the BOSS-N1 option. This is the only RAID option for these nodes; the BOSS-N1 supports RAID 0 or 1.
M.2 NVMe Drives Supported:
- 480GB
- 800GB
- 960GB
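The RAID 0/1 choice on the two BOSS-N1 drives trades capacity for redundancy, which can be made concrete with a small sketch (the function name below is ours, not Dell's):

```python
def usable_capacity_gb(drive_gb: int, raid_level: int) -> int:
    """Usable capacity of a two-drive BOSS-N1 array at the given RAID level."""
    if raid_level == 0:
        # RAID 0 stripes across both drives: full combined capacity,
        # but no redundancy.
        return 2 * drive_gb
    if raid_level == 1:
        # RAID 1 mirrors one drive onto the other: half the raw
        # capacity, but the boot volume survives a single drive failure.
        return drive_gb
    raise ValueError("BOSS-N1 supports only RAID 0 or RAID 1")

print(usable_capacity_gb(480, 0))  # 960 GB striped
print(usable_capacity_gb(480, 1))  # 480 GB mirrored
```

For a boot device, RAID 1 is the usual choice: losing half the capacity matters far less than losing the OS volume in a remote cell-site deployment.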
Non-RAID M.2 Riser | M.2 2280/22110
The M.2 riser is the second storage option, available only on the 2U XR8620t node. Only one of these storage options can be installed at a time. The cards connect to the system board through the power interposer board.
M.2 NVMe Drives Supported:
- 480GB
- 800GB
- 960GB
- 1.92TB
RAID on Riser | RoR-N1
Dell RAID on Riser N1 (RoR-N1) is a RAID solution designed to provide boot and data-store support. Dell Technologies recommends using the RoR-N1 controller only as an operating system boot device.
M.2 NVMe Drives Supported:
- 480GB
- 960GB
The Heater Manager subsystem (HM) pre-heats the system to above 5°C before it can power on.
Systems like the XR8000 that support Radio Access Network (RAN) workloads are deployed in remote telco locations where they must operate in extended (-20°C to 65°C) or extreme (-20°C to 55°C) temperature ranges. Because many hardware components (such as iDRAC, CPU, DIMMs, and SSDs) cannot operate below 0°C, the system must be pre-heated before power-on. The HM heats the system as needed until all heater zones (9 zones total) are above 5°C.
Restrictions:
- The Heater Manager subsystem (HM) is supported only on the 2U XR8620t.
- HM heating from -20°C to the start of system boot takes about 4 minutes.
- Maximum power draw per 2U HM sled is about 750 W during the preheat process.
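From the restriction figures above (about 750 W for roughly 4 minutes), the worst-case energy cost of one preheat cycle can be estimated. This is our own back-of-the-envelope arithmetic, not a Dell specification:

```python
# Rough energy estimate for one HM preheat cycle, from the stated limits.
PREHEAT_POWER_W = 750   # max power draw per 2U HM sled during preheat
PREHEAT_MINUTES = 4     # heating from -20°C to the start of system boot

def preheat_energy_wh() -> float:
    """Approximate energy consumed by one preheat cycle, in watt-hours."""
    return PREHEAT_POWER_W * PREHEAT_MINUTES / 60

print(preheat_energy_wh())  # 50.0 Wh per sled
```

At around 50 Wh per sled per cold start, the dominant planning concern is not energy but the 750 W peak draw, which the site power budget must accommodate on top of normal load.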
| System | Zone | Heating Item | Heater pad location |
|---|---|---|---|
| Lower U | 1 | CPU | Top of CPU heatsink |
| | 2 | DIMM | Back of PCB |
| | 3 | DIMM | Back of PCB |
| | 4 | Lower U PCIe riser, slot 3 | Back of PCIe PCB |
| | 5 | PCH, BMC, CPLD, and LOM | Back of PCB |
| | 6 | Not used | - |
| Upper U | 7 | RoR-N1 or M.2 NVMe, A side | Under AL plate & beneath M.2 |
| | 7 | RoR-N1 or M.2 NVMe, B side | Under AL plate & beneath M.2 |
| | 8 | RoR-N1 RAID chip | Top of RAID controller |
| | 9 | Upper U PCIe riser, slot 2 | Back of PCIe card |
| | 10 | Upper U PCIe riser, slot 1 | Back of bracket |
HM Zones
- Zone 1: CPU
- Zone 2: DIMM
- Zone 4: Riser 2, slot 3
- Zone 7: M.2 riser
- Zones 7 & 8: RoR-N1
- Zones 9 & 10: Riser 1, slots 1 & 2

The XR8000r modular rack server supports only two 60 mm Reverse Airflow (RAF) PSUs. AC and DC options are available; do not mix PSU types. The PSUs with blue straps are designed for reverse airflow (RAF). This chassis has front access to the node servers and the PSUs, which means the fans are at the rear of the server nodes and pull air out of the chassis. Normal Airflow (NAF) PSUs push air into the chassis.

XR8000r chassis:
- Height 87.05 mm (3.42 inches)
- Width 482 mm (18.97 inches) with mount ear
- Width 448 mm (17.63 inches) without mount ear
- Depth 430 mm (16.92 inches) cable management to rear wall
- Depth 350 mm (13.77 inches) mounting surface to rear wall
- Weight 18.52 kg (40.84 pounds)
XR8610t 1U sled:
- Height 41.25 mm (1.62 inches)
- Width 184.8 mm (7.28 inches)
- Depth 433.5 mm (17.07 inches)
- Weight 3.90 kg (8.59 pounds)
XR8620t 2U sled:
- Height 83.28 mm (3.28 inches)
- Width 184.8 mm (7.28 inches)
- Depth 433.5 mm (17.07 inches)
- Weight 5.25 kg (11.57 pounds)







































