Current Configuration
(List below serves as quick links to each section)
HPE Chassis
HPE Processors
GPU XL675d Modular
GPU XL675d PCIe
GPU XL645d Modular
GPU XL645d PCIe
Drive Options
Cable Kit XL675d
Cable Kit XL645d
HPE Memory
HPE SFF Drives SAS (2.5in)
HPE SFF Drives SATA (2.5in)
HPE NVMe
Networking / InfiniBand Adapters XL675d
Networking / InfiniBand Adapters XL645d
Storage Controller XL675d
Storage Controller XL645d
iLO Server Management
Power Cable Kits
Power Supply Unit (PSU) PCIe
Power Supply Unit (PSU) Modular
Power Supply Unit (PSU) Individual Replacements
Select Warranty
HPE ProLiant Apollo 6500 Server (G10+) CTO
-PLEASE CONTACT FOR CUSTOM CONFIGURATIONS. DISK SHELVES, COMPLETE CONFIGURATIONS & TURN KEY SYSTEMS AVAILABLE.
-SPARE PARTS AVAILABLE FOR EACH MODEL. PLEASE ASK FOR PRICING ON SPARE DISK, CONTROLLERS, POWER SUPPLIES, NICs, HBAs, SFPs, Cables, etc.
What's New
- Apollo 6500 Gen10 Plus Chassis | P19674-B21
- (2) Node Options | XL645d & XL675d
- (2) AMD 2nd or 3rd Gen EPYC™ Series Processors
- (32) 3200 MT/s DDR4 DIMMs
- (16) SFF Drives
- Up to (12) NVMe Supported
- Up to (16 PCIe) or (8 SXM4) GPUs Supported
- Up to (6) PSUs
- HPE NS204i Boot Supported
- Direct Liquid Cooling System fully integrated
- AMD Instinct™ MI100 with 2nd Gen Infinity Fabric™ Link
- NVIDIA H100 and AMD MI210 PCIe GPU support
- HPE iLO 5
Ideal for:
- Artificial Intelligence (AI)
- Big Data Analytics
- Data Analytics
- Deep Learning (DL)
- High-Performance Computing (HPC)
- Large Language Models (LLMs)
- Machine Learning (ML)
The HPE Apollo 6500 Gen10 Plus chassis is a 6U platform designed for extreme performance and scalability, purpose-built to support demanding workloads like HPC, AI, and deep learning. It houses up to two server nodes and supports flexible GPU configurations, enabling the integration of either PCIe or SXM GPU architectures. The chassis offers advanced cooling solutions, including Direct Liquid Cooling (DLC), which enhances power efficiency and ensures optimal thermal management even under intense computational loads.
The HPE ProLiant XL675d Gen10 Plus node is a dual-socket system optimized for maximum GPU density and performance. It supports up to 8 NVIDIA HGX A100 GPUs in the SXM configuration or up to 10 double-wide PCIe GPUs, making it a powerhouse for deep learning, AI training, and complex HPC simulations. Powered by dual AMD EPYC processors with up to 64 cores each, it delivers exceptional processing power paired with support for up to 4 TB of DDR4 memory.
The HPE ProLiant XL645d Gen10 Plus node is a single-socket system designed for flexible and scalable GPU deployments. It supports up to 8 single-wide or 4 double-wide PCIe GPUs, making it well-suited for machine learning inference, data analytics, and virtualized environments. With a single AMD EPYC processor offering up to 64 cores and support for up to 2 TB of DDR4 memory, the XL645d provides a balanced mix of performance and efficiency.
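The memory ceilings quoted above follow directly from the DIMM slot counts. A quick check, assuming 128 GB RDIMMs (the largest size commonly listed for these nodes; confirm actual supported DIMM capacities in the HPE QuickSpecs):

```python
# Maximum DDR4 memory per node, assuming 128 GB DIMMs.
# (Assumption: supported DIMM sizes should be verified against HPE QuickSpecs.)
DIMM_GB = 128

xl675d_slots = 32  # dual-socket node, 16 DIMM slots per CPU
xl645d_slots = 16  # single-socket node

print(f"XL675d: {xl675d_slots * DIMM_GB // 1024} TB")  # 4 TB
print(f"XL645d: {xl645d_slots * DIMM_GB // 1024} TB")  # 2 TB
```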

- (16) Drives across (2) Drive bays
- Supports up to (6) PSUs
- Supports (1) Full-Width XL675d Node
- Supports (2) Half-Width XL645d Nodes
- Support for (4, 8, 10, or 16) GPUs
- Support for SXM4 & PCIe GPUs
Different GPU types cannot be mixed
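The chassis-level GPU rules above can be expressed as a small validity check. The sketch below is illustrative only (not HPE tooling); the maxima are the figures quoted in this document:

```python
# Maximum GPU counts per node type, per the rules listed above.
# One GPU type per configuration; mixing is not allowed.
MAX_GPUS = {
    ("XL675d", "SXM4"): 8,
    ("XL675d", "PCIe-DW"): 10,
    ("XL675d", "PCIe-SW"): 16,
    ("XL645d", "PCIe-DW"): 4,
    ("XL645d", "PCIe-SW"): 8,
    # XL645d + SXM4 intentionally absent: not supported.
}

def config_ok(node: str, gpu_type: str, count: int) -> bool:
    """Return True if the (node, GPU type, count) combination is supported."""
    max_n = MAX_GPUS.get((node, gpu_type))
    return max_n is not None and 0 < count <= max_n

print(config_ok("XL675d", "SXM4", 8))   # True
print(config_ok("XL645d", "SXM4", 4))   # False: XL645d does not support SXM4
```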
HPE XL675d Full Width Node | P19725-B21 | Apollo 6500 Supports (1)
(2) AMD 2nd or 3rd Gen EPYC
(32) DDR4 3200 MT/s DIMMs
(1) OCP 3.0 Dedicated Slot
(6) PCIe 4.0 Slots
- Support for (10 DW, or 16 SW) PCIe GPUs
- Support for (8) SXM4 GPUs
- HPE ProLiant XL675d Service Guide
- HPE ProLiant XL675d User Guide
HPE XL645d Half Width Node | P19726-B21 | Apollo 6500 Supports (2)
(1) AMD 2nd or 3rd Gen EPYC
(16) DDR4 3200 MT/s DIMMs
(1) OCP 3.0 Dedicated Slot
(2) PCIe 4.0 Slots
- Support for (4 DW, or 8 SW) PCIe GPUs
- SXM4 GPUs Not Supported
- HPE ProLiant XL645d Service Guide
- HPE ProLiant XL645d User Guide
HPE Apollo 6500 Gen10 Plus Maximum Internal Storage
| Drive Type | XL675d Capacity | XL645d Capacity |
|---|---|---|
| Hot Plug SFF SATA HDD | 16 x 2 TB = 32 TB | 8 x 2 TB = 16 TB |
| Hot Plug SFF SAS HDD | 16 x 2 TB = 32 TB | 8 x 2 TB = 16 TB |
| Hot Plug SFF NVMe PCIe SSD | 16 x 15.36 TB = 245.76 TB | 16 x 15.36 TB = 245.76 TB |
| Hot Plug SFF SATA SSD | 16 x 7.68 TB = 122.88 TB | 8 x 7.68 TB = 61.44 TB |
| Hot Plug SFF SAS SSD | 16 x 15.3 TB = 244.8 TB | 8 x 15.3 TB = 122.4 TB |
Storage controller support differs between the two node servers: the XL645d half-width node does not support Flexible Smart Array cards, while the XL675d does. Both nodes support the optional M.2 boot device, though each uses a different enablement card to achieve the same outcome. The cable kits required for each drive configuration are listed below.
| HPE ProLiant XL675d Storage Controller Cable Kits | | | | |
|---|---|---|---|---|
| HPE Storage Configuration | Main Cable Kit | Addt. Cable Kit | Main Backplane | Addt. Backplane |
| 8 Embedded SR100i SATA | P31480-B21 | | P25877-B21 | |
| 8 Embedded SR100i SATA + Up to 8 NVMe - Switch Direct Attached | P31480-B21 | P31491-B21 | P25877-B21 | P25879-B21 |
| 8 SAS/SATA (AROC) | P27764-B21 | | P25877-B21 | |
| 8 SAS/SATA (AROC) + Up to 8 NVMe - Switch Direct Attached | P27764-B21 | P31491-B21 | P25877-B21 | P25879-B21 |
| 16 SAS/SATA (AROC) | P31490-B21 | | P25877-B21 | P25877-B21 |
| Up to 8 NVMe - Switch Direct Attached | P31491-B21 | | P25879-B21 | |
| 2 Embedded SATA + 6 NVMe - Switch Direct Attached | P39951-B21 | | P25879-B21 | |
| 2 SAS/SATA (AROC) + 6 NVMe - Switch Direct Attached | P39952-B21 | | P25879-B21 | |
| 2 Embedded SATA + 6 NVMe - CPU Direct Attached | P27283-B21 | | P25879-B21 | |
HPE Apollo 6500 Gen10 Plus XL675d Cable Kits:
- HPE XL675d Gen10 Plus 8SFF CPU Connected x4 NVMe Cable Kit (P27279-B21)
- HPE XL675d Gen10 Plus 8SFF CPU Connected x4 NVMe and 8SFF Switch Connected x4 NVMe Cable Kit (P27280-B21)
- HPE XL675d Gen10 Plus 2SFF Smart Array SAS and 6SFF CPU Connected x4 NVMe Cable Kit (P27281-B21)
- HPE XL675d Gen10 Plus 2SFF Embedded SATA and 6SFF CPU Connected x4 NVMe Cable Kit (P27283-B21)
- HPE XL675d Gen10 Plus 8SFF Smart Array SAS Cable Kit (P27764-B21)
- HPE XL675d Gen10 Plus 8SFF Smart Array SR100i SATA Cable Kit (P31480-B21)
- HPE XL675d Gen10 Plus 16SFF Smart Array SAS Cable Kit (P31490-B21)
- HPE XL675d Gen10 Plus 6SFF Switch Connected x4 NVMe Cable Kit (P31491-B21)
- HPE XL675d Gen10 Plus 2SFF Embedded SATA and 6SFF Switch Connected x4 NVMe Cable Kit (P39951-B21)
- HPE XL675d Gen10 Plus 2SFF Smart Array SAS and 6SFF Switch Connected x4 NVMe Cable Kit (P39952-B21)
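Because each XL675d drive configuration maps to a fixed bill of materials, the table above can be treated as a simple lookup. A sketch (the part numbers are those listed above; the dictionary structure itself is illustrative, not HPE tooling):

```python
# XL675d storage configuration -> required cable kits and backplanes,
# transcribed from the table above. Configurations omitted here follow
# the same pattern.
XL675D_KITS = {
    "8 Embedded SR100i SATA": {
        "cables": ["P31480-B21"], "backplanes": ["P25877-B21"]},
    "8 SAS/SATA (AROC)": {
        "cables": ["P27764-B21"], "backplanes": ["P25877-B21"]},
    "16 SAS/SATA (AROC)": {
        "cables": ["P31490-B21"], "backplanes": ["P25877-B21", "P25877-B21"]},
    "Up to 8 NVMe - Switch Direct Attached": {
        "cables": ["P31491-B21"], "backplanes": ["P25879-B21"]},
    "2 Embedded SATA + 6 NVMe - CPU Direct Attached": {
        "cables": ["P27283-B21"], "backplanes": ["P25879-B21"]},
}

def parts_for(config: str) -> list[str]:
    """Flatten the bill of materials for one storage configuration."""
    kit = XL675D_KITS[config]
    return kit["cables"] + kit["backplanes"]

print(parts_for("16 SAS/SATA (AROC)"))
# ['P31490-B21', 'P25877-B21', 'P25877-B21']
```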
| HPE ProLiant XL645d Storage Controller Cable Kits | |||||
|---|---|---|---|---|---|
| HPE Storage Configuration (Per Node) | Cable Kit | Enablement Card | Backplane | M.2 Cable Kit | M.2 Enablement Card |
| 8 Embedded SATA | P31487-B21 | P25877-B21 | |||
| 8 SAS/SATA (Smart Array) | P31488-B21 | HPE Smart Array | P25877-B21 | ||
| 2 Embedded SATA + 2 x4 NVMe | P31483-B21 | P25879-B21 | |||
| 2 Embedded SATA + 6 x4 NVMe | P31482-B21 | P25879-B21 | |||
| 2 SAS/SATA (Smart Array) + 2 x4 NVMe | P31486-B21 | HPE Smart Array | P25879-B21 | ||
| No SFF Drives + NS204i-t M.2 Boot Device | P31481-B21 | P20292-B21 | |||
| 8 Embedded SATA + NS204i-t M.2 Boot Device | P31487-B21 | P25877-B21 | P31481-B21 | P20292-B21 | |
| 8 SAS/SATA (Smart Array) + NS204i-t M.2 Boot Device | P31488-B21 | HPE Smart Array | P25877-B21 | P31481-B21 | P20292-B21 |
| 2 SAS/SATA (Smart Array) + 4 x4 NVMe | P31484-B21 | HPE Smart Array | P25879-B21 | | |
| 8 x4 NVMe Gen4 | P25883-B21 | P25879-B21 | |||
| 6 x4 NVMe + NS204i-t M.2 Boot Device | P25879-B21 | P48120-B21 | P20292-B21 | ||
| 8 SAS/SATA (Smart Array) + 6 x4 NVMe + NS204i-t M.2 Boot Device | P31488-B21 | HPE Smart Array | P25879-B21 | P48120-B21 | P20292-B21 |
| 2 x4 NVMe + NS204i-t M.2 Boot Device | P25879-B21 | P59752-B21 | P20292-B21 | ||
HPE Apollo 6500 Gen10 Plus XL645d Cable Kits:
- HPE XL645d Gen10 Plus 2SFF Smart Array SR100i SATA and 2SFF CPU Connected x4 NVMe Cable Kit (P31483-B21)
- HPE XL645d Gen10 Plus 2SFF Smart Array SAS and 2SFF CPU Connected x4 NVMe Cable Kit (P31486-B21)
- HPE XL645d Gen10 Plus 8SFF Embedded SATA Controller Cable Kit (P31487-B21)
- HPE XL645d Gen10 Plus 8SFF Smart Array SAS Cable Kit (P31488-B21)
- HPE XL645d Gen10 Plus M.2 Cable Kit (P31481-B21)
- HPE XL645d Gen10 Plus 8SFF Embedded SATA Controller x4 NVMe Cable Kit (P25883-B21)
- HPE XL645d Gen10 Plus 2SFF Smart Array SAS/SATA and 4SFF CPU Connected x4 NVMe Cable Kit (P31484-B21)
- HPE XL645d Gen10 Plus 2SFF Smart Array SATA and 6SFF Switch Connected x4 NVMe Cable Kit (P31482-B21)
- HPE Apollo 6500 Gen10 Plus M.2 2 x 4NVMe Cable Kit (P59752-B21)
| HPE ProLiant XL675d PCIe Fabric Riser - Primary, Secondary, or Tertiary Riser | |||||
|---|---|---|---|---|---|
| Slot # | Technology | Bus Width | Connector Width | Slot Form Factor | Supported CPU |
| 17 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 |
| 18 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 |
| 19 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 or 2 |
| 20 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 or 2 |
| 21 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 or 2 |
| 22 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 or 2 |
| XL675d PCIe GPU Riser | | | | | |
|---|---|---|---|---|---|
| Slot # | Instinct™ MI100 GPU with 4x4 bridge | HGX™ A100 GPU with 2x2 bridge | PCIe Double Wide | PCIe Single Wide | Supported CPU |
| 1 | ✓ | ✓** | ✓ | ✓ | Processor 1 |
| 2 | ✓ | ✓ | ✓ | ✓ | Processor 1 |
| 3 | ✓ | ✓ | ✓ | ✓ | Processor 1 |
| 4 | | | | ✓ | Processor 1 |
| 5 | ✓ | ✓ | ✓ | ✓ | Processor 1 |
| 6 | | | | ✓ | Processor 1 |
| 7 | ✓ | ✓ | ✓ | ✓ | Processor 1 |
| 8 | | | | ✓ | Processor 1 |
| 9 | ✓ | ✓ | ✓ | ✓ | Processor 1 or 2 |
| 10 | | | | ✓ | Processor 1 or 2 |
| 11 | ✓ | ✓ | ✓ | ✓ | Processor 1 or 2 |
| 12 | | | | ✓ | Processor 1 or 2 |
| 13 | ✓ | ✓ | ✓ | ✓ | Processor 1 or 2 |
| 14 | | | | ✓ | Processor 1 or 2 |
| 15 | ✓ | ✓ | ✓ | ✓ | Processor 1 or 2 |
| 16 | ✓ | ✓** | ✓ | ✓ | Processor 1 or 2 |
| Notes | Single-wide and double-wide GPUs cannot be installed together, and different GPU types cannot be mixed. Instinct™ MI100 GPUs with the Infinity Fabric 4x4 bridge follow this placement: first set of four bridged GPUs in slots 2, 3, 5, 7; second set in slots 9, 11, 13, 15. ** The optimal NVLink bridge configuration is 8 GPUs rather than 10, with bridges installed in slot pairs 2-3, 5-7, 9-11, and 13-15; unbridged GPUs can still be installed in slots 1 and 16 when linked GPUs occupy the other slots. | | | | |
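The MI100 bridge-placement rule in the notes above lends itself to a programmatic check. A sketch using the slot groups quoted in the notes (the function itself is illustrative, not HPE tooling):

```python
# Valid Infinity Fabric 4x4 bridge groups for MI100 in the XL675d,
# per the placement note above.
MI100_BRIDGE_SETS = [
    {2, 3, 5, 7},      # first set of four bridged GPUs
    {9, 11, 13, 15},   # second set of four bridged GPUs
]

# NVLink bridge slot pairs for the optimal 8-GPU A100 PCIe configuration.
NVLINK_PAIRS = [(2, 3), (5, 7), (9, 11), (13, 15)]

def mi100_placement_ok(slots: set[int]) -> bool:
    """True if the populated slots are exactly one or both documented bridge sets."""
    groups = [s for s in MI100_BRIDGE_SETS if s <= slots]
    covered = set().union(*groups) if groups else set()
    return bool(groups) and slots == covered

print(mi100_placement_ok({2, 3, 5, 7}))   # True: first bridge set
print(mi100_placement_ok({1, 2, 3, 4}))   # False: not a documented set
```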
HPE ProLiant XL645d GPU Options | Supports (2) Half Width
Does not support SXM4 GPUs
Supports PCIe GPU bridging (4 per bridge)
Max (4) DW or (8) SW GPUs
Mixing of GPU types is not allowed
Air cooling only
HPE ProLiant XL675d GPU Options | Supports (1) Full Width
Supports SXM4 & PCIe GPUs
Supports PCIe GPU bridging (4 per bridge)
Max (10) DW or (16) SW PCIe GPUs
Max (8) SXM4 GPUs
Mixing of GPU types is not allowed
Air cooling & liquid cooling
(15) 80 mm dual-rotor, hot-pluggable chassis fans come standard in the 6500 chassis; there is no fan redundancy. The system also supports air-cooled or liquid-cooled components such as the GPUs and CPUs.

The chassis supports up to six 3000 W Platinum hot-pluggable power supplies, providing full redundancy and scalability across configurations and reliable power delivery even under peak loads. Power requirements vary with the GPU type: PCIe GPUs such as the NVIDIA A100 and AMD Instinct MI100 typically draw 300 W to 350 W each and are powered through standard PCIe power cables, while SXM4 GPUs such as the NVIDIA HGX A100 can demand up to 500 W each, necessitating direct liquid cooling (DLC) for thermal management and specialized power cabling integrated with the SXM GPU tray. The chassis also incorporates shared power infrastructure at the node level, ensuring efficient power distribution across CPUs, GPUs, and other components.
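The GPU figures above dominate the power budget. A rough budget check against the six 3000 W supplies (GPU wattages as quoted above; the CPU TDP and platform overhead numbers are illustrative assumptions, not HPE specifications):

```python
# Rough chassis power-budget sketch. GPU wattages are the figures quoted
# above; CPU TDP and platform overhead are illustrative assumptions.
PSU_WATTS = 3000
N_PSUS = 6

GPU_TDP = {"A100 PCIe": 300, "MI100 PCIe": 300, "HGX A100 SXM4": 500}
CPU_TDP = 280            # assumption: top-bin EPYC TDP
PLATFORM_OVERHEAD = 800  # assumption: fans, drives, NICs, conversion losses

def chassis_draw(gpu: str, n_gpus: int, n_cpus: int) -> int:
    """Estimated worst-case chassis draw in watts for one GPU configuration."""
    return GPU_TDP[gpu] * n_gpus + CPU_TDP * n_cpus + PLATFORM_OVERHEAD

draw = chassis_draw("HGX A100 SXM4", 8, 2)
print(draw, "W")                           # 5360 W
# With N+N redundancy, only half the installed PSUs count toward the budget:
print(draw <= PSU_WATTS * N_PSUS // 2)     # True
```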
