HPE ProLiant XL675d Gen10 Plus Node Server Chassis | P19725-B21
Built for the Exascale Era, the HPE Apollo 6500 Gen10 Plus System accelerates performance, powered by NVIDIA HGX A100 Tensor Core GPUs with NVLink or AMD Instinct™ MI200 accelerators with 2nd Gen Infinity Fabric™ Link, to take on the most complex HPC and AI workloads. This purpose-built platform delivers enhanced performance with premier GPUs, fast GPU interconnects, high-bandwidth fabric, and configurable GPU topology, along with rock-solid reliability, availability, and serviceability (RAS). Configure with single or dual processor options for a better balance of processor cores, memory, and I/O. Improve system flexibility with support for 4, 8, 10, or 16 GPUs and a broad selection of operating systems and options, all within a customized design that reduces costs, improves reliability, and provides leading serviceability.
What’s New
- NVIDIA H100 and AMD MI210 PCIe GPU support
- AMD Instinct™ MI100 with 2nd Gen Infinity Fabric™ Link
- Direct liquid cooling system fully integrated, installed, and supported by HPE; support for PCIe Gen4 GPUs also provides extreme compute flexibility.
- Flexible support and options: InfiniBand, Ethernet, HPE Slingshot; Ubuntu and enterprise operating systems such as Windows, VMware, SUSE, and Red Hat; and HPE Pointnext for advisory, professional, and operational services, along with flexible consumption models across the globe.
- Enterprise RAS with HPE iLO 5, easy-access modular design, and N+N power supplies.
- Save time and cost and gain improved user productivity with HPE iLO 5.
- World’s most secure industry standard server using HPE iLO 5.
Know your Server - CTO Configuration Support
Need help with the configuration? Contact us today!
HPE Apollo 6500 Gen10 PLUS Base Knowledge
The HPE Apollo 6500 Gen10 Plus Chassis (P19674-B21) is a very powerful 4U server and a bit unique compared to the non-Plus version. Inside this chassis, you can install two different types of nodes. The XL645d (P19726-B21) is a 2U, half-width node that holds 1 processor; the XL675d (P19725-B21) is also a 2U node but is full-width and holds 2 processors. Each node server pairs with one of two Accelerator Trays, which must be installed above the node server in the 6500 Gen10 Plus chassis: the SXM4 Modular Accelerator Tray or the PCIe GPU Accelerator Tray. Each tray has a different type of PSU and accepts different GPU-style cards. Each node type also has different CPUs, Smart Arrays, cabling, and configurations. Please keep track of which server type you have selected when going through the CTO.
This CTO is split by the node server you choose. They are pretty similar, but the major differences are the number of GPUs the accelerator tray can handle and the total number of DIMMs supported. Mixing of RDIMM and LRDIMM memory is not supported.
XL675d Max Memory
32 total DIMMs, 16 DIMM slots per processor, 8 channels per processor, 2 DIMMs per channel
Maximum capacity (LRDIMM): 4.0 TB (up to 32 x 128 GB LRDIMM @ 3200 MT/s)
Maximum capacity (RDIMM): 2.0 TB (up to 32 x 64 GB RDIMM @ 3200 MT/s)
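The DIMM math above reduces to a simple calculation. Below is a minimal Python sketch of it, assuming one unmixed DIMM type across all 32 slots; the function name is illustrative, not an HPE tool.

```python
# Illustrative sketch of the XL675d memory limits described above.
# Values come from the figures listed in this section, not an official HPE configurator.

DIMM_SLOTS = 32                               # 16 slots per processor x 2 processors
MAX_DIMM_GB = {"LRDIMM": 128, "RDIMM": 64}    # largest supported module per DIMM type

def max_memory_tb(dimm_type: str, dimm_count: int = DIMM_SLOTS) -> float:
    """Return the maximum capacity in TB for a single (unmixed) DIMM type."""
    if dimm_type not in MAX_DIMM_GB:
        raise ValueError("RDIMM and LRDIMM cannot be mixed; pick one type")
    return dimm_count * MAX_DIMM_GB[dimm_type] / 1024

print(max_memory_tb("LRDIMM"))  # 4.0 TB
print(max_memory_tb("RDIMM"))   # 2.0 TB
```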
HPE Apollo 6500 Gen10 Plus Processors
Mixing of 2 different processor models is NOT allowed.
HPE Apollo 6500 Gen10 PLUS Maximum Internal Storage
| Drive Type | XL675d Capacity | XL645d Capacity |
|---|---|---|
| Hot Plug SFF SATA HDD | 16 x 2 TB = 32 TB | 8 x 2 TB = 16 TB |
| Hot Plug SFF SAS HDD | 16 x 2 TB = 32 TB | 8 x 2 TB = 16 TB |
| Hot Plug SFF NVMe PCIe SSD | 16 x 15.36 TB = 245.76 TB | 16 x 15.36 TB = 245.76 TB |
| Hot Plug SFF SATA SSD | 16 x 7.68 TB = 122.88 TB | 8 x 7.68 TB = 61.44 TB |
| Hot Plug SFF SAS SSD | 16 x 15.3 TB = 244.8 TB | 8 x 15.3 TB = 122.4 TB |
HPE storage controllers are a bit different for these node servers. The XL645d does not support Flexible Smart Array cards in its half-width node server, but the XL675d does. Both support the optional M.2 boot device, although through a different card with the same outcome. Below is a list of the cable kits required for each hard drive configuration.
HPE ProLiant XL675d Storage Controller Cable Kits

| HPE Storage Configuration | Main Cable Kit | Additional Cable Kit | Main Backplane | Additional Backplane |
|---|---|---|---|---|
| 8 Embedded SR100i SATA | P31480-B21 | | P25877-B21 | |
| 8 Embedded SR100i SATA + up to 8 NVMe - Switch Direct Attached | P31480-B21 | P31491-B21 | P25877-B21 | P25879-B21 |
| 8 SAS/SATA (AROC) | P27764-B21 | | P25877-B21 | |
| 8 SAS/SATA (AROC) + up to 8 NVMe - Switch Direct Attached | P27764-B21 | P31491-B21 | P25877-B21 | P25879-B21 |
| 16 SAS/SATA (AROC) | P31490-B21 | | P25877-B21 | P25877-B21 |
| Up to 8 NVMe - Switch Direct Attached | P31491-B21 | | P25879-B21 | |
| 2 Embedded SATA + 6 NVMe - Switch Direct Attached | P39951-B21 | | P25879-B21 | |
| 2 SAS/SATA (AROC) + 6 NVMe - Switch Direct Attached | P39952-B21 | | P25879-B21 | |
| 2 Embedded SATA + 6 NVMe - CPU Direct Attached | P27283-B21 | | P25879-B21 | |
HPE Apollo 6500 Gen10 Plus XL675d Cable Kits:
- HPE XL675d Gen10 Plus 8SFF CPU Connected x4 NVMe Cable Kit (P27279-B21)
- HPE XL675d Gen10 Plus 8SFF CPU Connected x4 NVMe and 8SFF Switch Connected x4 NVMe Cable Kit (P27280-B21)
- HPE XL675d Gen10 Plus 2SFF Smart Array SAS and 6SFF CPU Connected x4 NVMe Cable Kit (P27281-B21)
- HPE XL675d Gen10 Plus 2SFF Embedded SATA and 6SFF CPU Connected x4 NVMe Cable Kit (P27283-B21)
- HPE XL675d Gen10 Plus 8SFF Smart Array SAS Cable Kit (P27764-B21)
- HPE XL675d Gen10 Plus 8SFF Smart Array SR100i SATA Cable Kit (P31480-B21)
- HPE XL675d Gen10 Plus 16SFF Smart Array SAS Cable Kit (P31490-B21)
- HPE XL675d Gen10 Plus 6SFF Switch Connected x4 NVMe Cable Kit (P31491-B21)
- HPE XL675d Gen10 Plus 2SFF Embedded SATA and 6SFF Switch Connected x4 NVMe Cable Kit (P39951-B21)
- HPE XL675d Gen10 Plus 2SFF Smart Array SAS and 6SFF Switch Connected x4 NVMe Cable Kit (P39952-B21)
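To keep the cable kit and backplane pairings straight while working through the CTO, the table above can be expressed as a simple lookup. Below is a minimal Python sketch; the shortened configuration labels and the helper name are illustrative, and the part numbers are those listed above.

```python
# Cable kit / backplane lookup for XL675d storage configurations,
# transcribed from the table above. Illustrative sketch only.

XL675D_STORAGE_KITS = {
    "8 Embedded SR100i SATA":                   {"cable": ["P31480-B21"],               "backplane": ["P25877-B21"]},
    "8 Embedded SR100i SATA + 8 NVMe (switch)": {"cable": ["P31480-B21", "P31491-B21"], "backplane": ["P25877-B21", "P25879-B21"]},
    "8 SAS/SATA (AROC)":                        {"cable": ["P27764-B21"],               "backplane": ["P25877-B21"]},
    "8 SAS/SATA (AROC) + 8 NVMe (switch)":      {"cable": ["P27764-B21", "P31491-B21"], "backplane": ["P25877-B21", "P25879-B21"]},
    "16 SAS/SATA (AROC)":                       {"cable": ["P31490-B21"],               "backplane": ["P25877-B21", "P25877-B21"]},
    "8 NVMe (switch)":                          {"cable": ["P31491-B21"],               "backplane": ["P25879-B21"]},
    "2 Embedded SATA + 6 NVMe (switch)":        {"cable": ["P39951-B21"],               "backplane": ["P25879-B21"]},
    "2 SAS/SATA (AROC) + 6 NVMe (switch)":      {"cable": ["P39952-B21"],               "backplane": ["P25879-B21"]},
    "2 Embedded SATA + 6 NVMe (CPU direct)":    {"cable": ["P27283-B21"],               "backplane": ["P25879-B21"]},
}

def kits_for(config: str) -> dict:
    """Return the cable kits and backplanes needed for a given storage configuration."""
    return XL675D_STORAGE_KITS[config]

print(kits_for("16 SAS/SATA (AROC)"))
# {'cable': ['P31490-B21'], 'backplane': ['P25877-B21', 'P25877-B21']}
```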
HPE Apollo 6500 Gen10 PLUS PCI Risers
HPE ProLiant XL675d PCIe Fabric Riser - Primary, Secondary, or Tertiary Riser

| Slot # | Technology | Bus Width | Connector Width | Slot Form Factor | Supported CPU |
|---|---|---|---|---|---|
| 17 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 |
| 18 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 |
| 19 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 or 2 |
| 20 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 or 2 |
| 21 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 or 2 |
| 22 | PCIe 4.0 | x16 | x16 | Half-height, half-length slot | Processor 1 or 2 |
XL675d PCIe GPU Riser

| Slot # | Instinct™ MI100 GPU with 4x4 bridge | HGX™ A100 GPU with 2x2 bridge | PCIe Double Wide | PCIe Single Wide | Supported CPU |
|---|---|---|---|---|---|
| 1 | P | P** | P | P | Processor 1 |
| 2 | P | P | P | P | Processor 1 |
| 3 | P | P | P | P | Processor 1 |
| 4 | | | | P | Processor 1 |
| 5 | P | P | P | P | Processor 1 |
| 6 | | | | P | Processor 1 |
| 7 | P | P | P | P | Processor 1 |
| 8 | | | | P | Processor 1 |
| 9 | P | P | P | P | Processor 1 or 2 |
| 10 | | | | P | Processor 1 or 2 |
| 11 | P | P | P | P | Processor 1 or 2 |
| 12 | | | | P | Processor 1 or 2 |
| 13 | P | P | P | P | Processor 1 or 2 |
| 14 | | | | P | Processor 1 or 2 |
| 15 | P | P | P | P | Processor 1 or 2 |
| 16 | P | P** | P | P | Processor 1 or 2 |
Notes: Single-wide and double-wide GPUs cannot be installed together, and different GPU types cannot be mixed. The Instinct™ MI100 with Infinity Fabric 4x4 Bridge for HPE follows this placement configuration: first set of four bridged GPUs in slots 2, 3, 5, and 7; second set of four bridged GPUs in slots 9, 11, 13, and 15. ** The optimal configuration for the NVLink bridges is 8 GPUs instead of 10, with the bridges installed in slot pairs 2-3, 5-7, 9-11, and 13-15. Unbridged GPUs can still be installed in slots 1 and 16 when bridged GPUs occupy the other slots.
The XL645d can only support 4 GPUs per server, for a total of 8 when two nodes are installed.
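For a CTO sanity check, the bridge-placement note above can be expressed as data. Below is a minimal Python sketch, assuming a simple set-based representation of populated slots; the helper name and validation logic are illustrative, not an HPE tool.

```python
# Bridge placement rules for the XL675d PCIe GPU riser, taken from the notes above.
# Illustrative sketch only; it mirrors the note, it is not an HPE configurator.

MI100_BRIDGE_SETS = [{2, 3, 5, 7}, {9, 11, 13, 15}]      # Infinity Fabric 4-way bridge groups
A100_NVLINK_PAIRS = [(2, 3), (5, 7), (9, 11), (13, 15)]  # optimal 8-GPU NVLink bridge pairs
UNBRIDGED_SLOTS = {1, 16}                                # may hold unbridged GPUs alongside bridged ones

def valid_mi100_placement(populated_slots: set[int]) -> bool:
    """Bridged MI100 GPUs must fill complete 4-way bridge groups."""
    bridged = populated_slots - UNBRIDGED_SLOTS
    groups_ok = all(group <= populated_slots or not (group & bridged)
                    for group in MI100_BRIDGE_SETS)
    return groups_ok and bridged <= set().union(*MI100_BRIDGE_SETS)

print(valid_mi100_placement({2, 3, 5, 7}))         # True: first bridge group complete
print(valid_mi100_placement({2, 3, 5}))            # False: partial bridge group
print(valid_mi100_placement({1, 2, 3, 5, 7, 16}))  # True: unbridged GPUs allowed in slots 1 and 16
```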
HPE NVIDIA GPU
This server is all about the GPU configuration and the ability to support SXM4 or PCIe GPUs. Please take note of which GPU you have installed and the desired configuration when choosing the GPU accessories in this CTO.
Mixing of GPUs is not allowed.
XL675d PCI GPU Rules
Please select (1) PCIe Accelerator and Bracket Cable Kit (P27285-B21) per GPU.
If you choose the PCIe Accelerator Tray and install the H100 GPU, you need to select (1) cable kit (P60567-B21) per GPU.
If you install the MI100 GPU, please select (1) bridge for every 4 GPUs.
If you choose the A100, please select (3) Ampere 2-way 2-slot Bridges (R6V66A) for every pair of GPUs.
XL675d Modular GPU Rules
You have the choice between Air Cooled or Liquid Cooled GPUs. If you install any liquid-cooled A100 GPUs, the HPE XL675d Gen10+ FIO HS for NVIDIA GPU (P36886-L24) must be installed.
XL645d PCI GPU Rules
For each PCIe GPU, you must also order the PCIe Accelerator and Bracket v2 Cable Kit (P27282-B21).
If you install the MI100 GPU, you will need to install (1) Fabric 4-way Bridge for HPE (R9B39A) for every 4 GPUs.
If you choose the H100 GPU, you will need to install (1) power cable kit (P60567-B21) for every NVIDIA H100 GPU.
For every pair of NVIDIA A100 PCIe GPUs, you must select (3) Ampere 2-way 2-slot Bridges for HPE (R6V66A).
XL645d Modular GPU Rules
Select (1) Modular Accelerator Power Cable Kit (P31489-B21) per Accelerator Tray.
You also have the choice of Air cooled or Liquid Cooled SXM4 GPUs.
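The PCIe GPU accessory rules above reduce to per-GPU quantities. Below is a minimal Python sketch of a quantity calculator, assuming the part numbers and ratios listed above; the 4-way bridge part number for the XL675d is not listed above, so it is left as a placeholder, and the function name is illustrative.

```python
# Per-GPU accessory quantities for the PCIe accelerator trays, taken from the rules above.
# Illustrative sketch, not an official HPE configurator. Assumes GPU counts that already
# satisfy the grouping rules (pairs for A100 bridges, sets of 4 for MI100 bridges).

PCIE_GPU_RULES = {
    "XL675d": {
        "bracket_kit": ("P27285-B21", 1),            # bracket cable kit per GPU
        "H100":  [("P60567-B21", 1)],                # power cable kit per H100
        "MI100": [("4-way bridge (see rule)", 1/4)], # one bridge per 4 GPUs; part number not listed above
        "A100":  [("R6V66A", 3/2)],                  # three 2-way bridges per GPU pair
    },
    "XL645d": {
        "bracket_kit": ("P27282-B21", 1),            # bracket v2 cable kit per GPU
        "H100":  [("P60567-B21", 1)],
        "MI100": [("R9B39A", 1/4)],
        "A100":  [("R6V66A", 3/2)],
    },
}

def pcie_gpu_accessories(node: str, gpu: str, gpu_count: int) -> dict:
    """Return the accessory part numbers and quantities for a PCIe GPU build."""
    rules = PCIE_GPU_RULES[node]
    part, per_gpu = rules["bracket_kit"]
    order = {part: gpu_count * per_gpu}
    for part, per_gpu in rules.get(gpu, []):
        order[part] = int(gpu_count * per_gpu)
    return order

# Example: eight A100 PCIe GPUs in an XL675d node
print(pcie_gpu_accessories("XL675d", "A100", 8))
# {'P27285-B21': 8, 'R6V66A': 12}
```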
HPE Apollo 6500 Gen10 PLUS Cooling
Fifteen 80 mm dual-rotor, hot-pluggable chassis fans. These systems also support air-cooled or liquid-cooled parts such as the GPUs and CPUs. There is no fan redundancy; all fans come standard in the 6500 chassis.
HPE Apollo 6500 Gen10 PLUS Flex Slot your power!
With the choice of a Modular or PCIe Accelerator Tray, each tray needs a different PSU setup. For air-cooled A100 modular trays, you have two power tray options; the same applies if you have installed MI100 or A100 PCIe GPUs. For all other GPU configurations, you can simply choose any of the three single PSU options.
HPE Apollo 6500 Gen10 PLUS Lights Out! iLO
HPE iLO 5 ASIC. The rear of the chassis has four 1Gb RJ-45 ports.
HPE iLO with Intelligent Provisioning (standard), with optional iLO Advanced and OneView
HPE iLO Advanced licenses offer smart remote functionality without compromise, for all HPE ProLiant servers. The license includes the full integrated remote console, virtual keyboard, video, and mouse (KVM), multi-user collaboration, console record and replay, and GUI-based and scripted virtual media and virtual folders. You can also activate the enhanced security and power management functionality.