














- PLEASE CONTACT US FOR CUSTOM CONFIGURATIONS. DISK SHELVES, COMPLETE CONFIGURATIONS & TURNKEY SYSTEMS AVAILABLE.
- SPARE PARTS AVAILABLE FOR EACH MODEL. PLEASE ASK FOR PRICING ON SPARE DISKS, CONTROLLERS, POWER SUPPLIES, NICs, HBAs, SFPs, CABLES, ETC.
The HPE Apollo 6500 Gen10 Server is specifically designed for HPC, AI, Deep Learning (DL), and Machine Learning (ML) applications. It features the robust 6500 chassis, which integrates the XL270d Gen10 system board with a GPU module stacked on top, ensuring optimal performance for intensive computational workloads. The server supports up to 16 drives on its 2-socket, 24-DIMM system tray, allowing for extensive storage and memory capacity. The Apollo Power Distribution Unit (PDU) houses the midplanes, efficiently connecting the entire system to deliver seamless integration and exceptional power management for demanding workloads.
Need help with the configuration? Contact us today!
Front: Up to (16) drives, with support for (4) NVMe drives. The Apollo 6500 holds (4) PSUs that connect to the PDU in the center of the chassis.
Mid: The PDU holds the (2) midplanes: the Processor/GPU midplane and the NVMe midplane. The midplanes enable the system board to interact with all the components in the server.
Rear: The XL270d Gen10 system tray sits below the GPU module, which holds the (8) GPUs. Both of these connect to the PDU midplane.
The HPE Apollo 6500 Gen10 Server's system tray, positioned beneath the GPU module, is a powerhouse of performance and flexibility. It supports two CPUs and 24 DIMMs, offering extensive computational and memory capacity. The XL270d system board includes a dedicated FlexibleLOM for customizable networking, internal M.2 drives for high-speed boot or cache storage, a Type-A Smart Array controller, and one PCIe slot for expansion. By default, the system features three SATA ports, enabling support for up to 12 SATA drives across two drive bays. For networking, it integrates four embedded 1GbE RJ-45 ports, ensuring robust connectivity for high-performance workloads.
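To sanity-check what a populated tray actually presents to the operating system, a short inventory script can help. The following is a minimal sketch, not an HPE tool, assuming a Linux host and root privileges for `dmidecode`; it counts CPU sockets from `/proc/cpuinfo` and populated DIMM slots from `dmidecode`, which should report at most 2 sockets and 24 DIMMs on a fully loaded XL270d tray.

```python
#!/usr/bin/env python3
"""Rough inventory check for a 2-socket, 24-DIMM tray (assumes Linux + root)."""
import subprocess

def cpu_socket_count() -> int:
    # Each unique "physical id" in /proc/cpuinfo corresponds to one CPU socket.
    with open("/proc/cpuinfo") as f:
        ids = {line.split(":")[1].strip()
               for line in f if line.startswith("physical id")}
    return len(ids)

def populated_dimm_count() -> int:
    # "dmidecode -t memory" lists every DIMM slot; empty slots report
    # "Size: No Module Installed". Requires root.
    out = subprocess.run(["dmidecode", "-t", "memory"],
                         capture_output=True, text=True, check=True).stdout
    return sum(1 for line in out.splitlines()
               if line.strip().startswith("Size:")
               and "No Module Installed" not in line)

if __name__ == "__main__":
    print(f"CPU sockets : {cpu_socket_count()} (XL270d tray supports 2)")
    print(f"DIMMs seen  : {populated_dimm_count()} (XL270d tray supports 24)")
```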

| Drive | Capacity | Configuration |
|---|---|---|
| Hot Plug SFF SAS SSD | 244 TB | 16 x 15.3 TB |
| Hot Plug SFF SATA SSD | 122 TB | 16 x 7.68 TB |
| Hot Plug SFF NVMe SSD | 30.7 TB | 4 x 7.68 TB |

Notes: (2) M.2 drives are supported.
The default S100i controller provides 14 embedded SATA lanes, but only 12 lanes across 3 ports are accessible to the drive bays, because 2 lanes are used to support the 2 M.2 options on the primary riser, the HPE Apollo PCIe/SATA M.2 FIO Riser Kit (863661-B22).

System Board

| Slot # | Technology | Bus Width | Connector Width | Slot Form Factor | Notes |
|---|---|---|---|---|---|
| 21 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Proc 2 |
HPE's manuals are severely lacking when it comes to GPU information. Please take your time when reading them. Below is a summary of what we have found.
HPE 8 GPU PCIe module:
- Only (1) version is supported at a time
- Supports up to (8) GPUs
- NOTE: supports up to (12) Tesla T4 GPUs
- Up to (4) high-speed fabric adapters
- Four topologies with NVLink
- 4:1 or 8:1 topology (PCIe GPU only)

8 SXM2 GPU module:
- Supports up to (4) or (8) SXM2 GPUs
- Each heatsink supports (4) NVLink SXM-2 GPUs
- Mixing GPUs is not supported
- Up to (4) PCIe slots
The SXM2 section has been removed from the current QuickSpecs document. We are not sure why, but most of this information comes from the older 2018 manuals.
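Once a GPU module is installed, it is worth confirming that the operating system sees the expected number of accelerators. The sketch below is a minimal example, assuming the NVIDIA driver and the `nvidia-ml-py` (pynvml) package are installed; the 8-GPU limit (12 for Tesla T4 on the PCIe module) reflects the figures listed above.

```python
#!/usr/bin/env python3
"""Count the GPUs visible to the driver (assumes nvidia-ml-py is installed)."""
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    print(f"GPUs detected: {count}")
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        # Older pynvml releases return bytes, newer ones return str.
        if isinstance(name, bytes):
            name = name.decode()
        print(f"  GPU {i}: {name}")
    # The Apollo 6500 Gen10 GPU module supports up to 8 GPUs (12 for Tesla T4).
    if count > 8:
        print("More than 8 GPUs detected; expected only with Tesla T4 cards.")
finally:
    pynvml.nvmlShutdown()
```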
PCIe GPU module:

| Slot # | Technology | Bus Width | Connector Width | Slot Form Factor | Notes |
|---|---|---|---|---|---|
| 11 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Proc 1 |
| 12 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Proc 1 |
| 9 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Proc 2 |
| 10 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Proc 2 |

SXM2 GPU module:

| Slot # | Technology | Bus Width | Connector Width | Slot Form Factor | Notes |
|---|---|---|---|---|---|
| 11 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | Dependent on the topology selected in the BIOS. See the User and Administrator Guide for full details |
| 12 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | |
| 9 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | |
| 10 | PCIe 3.0 | x16 | x16 | Full-height, half-length slot | |
Please view the manual for all the topology information.
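To see which topology a running system ends up with, the NVIDIA driver can print the GPU interconnect matrix. The small sketch below simply wraps `nvidia-smi topo -m`; entries such as NV1/NV2 indicate NVLink connections between SXM2 GPUs, while PIX/PHB/NODE/SYS indicate PCIe or inter-socket paths.

```python
#!/usr/bin/env python3
"""Print the GPU interconnect topology matrix reported by the NVIDIA driver."""
import shutil
import subprocess
import sys

if shutil.which("nvidia-smi") is None:
    sys.exit("nvidia-smi not found; is the NVIDIA driver installed?")

# "nvidia-smi topo -m" prints a matrix of GPU-to-GPU link types:
# NV# = NVLink (SXM2 module), PIX/PHB/NODE/SYS = PCIe or inter-socket paths.
result = subprocess.run(["nvidia-smi", "topo", "-m"],
                        capture_output=True, text=True, check=True)
print(result.stdout)
```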
The HPE Apollo 6500 Gen10 Server comes with three SATA ports, which support up to 12 SATA drives across two drive bays. To use SAS drives, a Smart Array Adapter is required. HPE recommends installing the Type-A Adapter for this purpose, as it conserves the server's PCIe slot for other expansions, such as NVMe drives.

For NVMe storage, an NVMe Enablement Kit must be installed. This kit occupies the PCIe slot on the system tray and connects to the NVMe port on the tray. Additionally, the NVMe Midplane needs to be installed to provide the data path to the backplanes. The server supports a maximum of four NVMe drives, offering high-speed storage capabilities ideal for performance-intensive workloads.
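Once the enablement kit and NVMe midplane are in place, a quick check from the operating system confirms which transport each drive enumerates on. This is a small sketch, assuming a Linux host with `lsblk` available; it groups whole disks by transport and flags anything over the four-NVMe-drive maximum noted above.

```python
#!/usr/bin/env python3
"""Group installed disks by transport (SATA/SAS/NVMe) using lsblk."""
import json
import subprocess
from collections import Counter

# -d: whole disks only (no partitions); -J: JSON output.
out = subprocess.run(["lsblk", "-d", "-J", "-o", "NAME,TRAN,SIZE,TYPE"],
                     capture_output=True, text=True, check=True).stdout
devices = json.loads(out)["blockdevices"]

counts = Counter(d.get("tran") or "unknown"
                 for d in devices if d.get("type") == "disk")
for tran, n in sorted(counts.items()):
    print(f"{tran:>8}: {n} drive(s)")

# The Apollo 6500 Gen10 supports a maximum of (4) NVMe drives.
if counts.get("nvme", 0) > 4:
    print("Warning: more than 4 NVMe drives detected; check the backplane limits.")
```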
