





-PLEASE CONTACT US FOR CUSTOM CONFIGURATIONS. DISK SHELVES, COMPLETE CONFIGURATIONS & TURNKEY SYSTEMS AVAILABLE.
-SPARE PARTS AVAILABLE FOR EACH MODEL. PLEASE ASK FOR PRICING ON SPARE DISKS, CONTROLLERS, POWER SUPPLIES, NICs, HBAs, SFPs, CABLES, ETC.
The HPE Apollo 6500 System provides the tools and the confidence to deliver high-performance computing (HPC) innovation. The system consists of three key elements: the HPE ProLiant XL270d Gen9 Accelerator Tray, the HPE Apollo 6500 Chassis, and the HPE Apollo 6000 Power Shelf.
The Apollo 6500 chassis supports two HPE ProLiant XL270d Gen9 Accelerator Trays. Each tray holds 2 CPUs, 16 DDR4 DIMMs, and up to 8 GPUs. The XL270d tray has 3 PCIe slots, one of which is dedicated to the recommended HPE Smart Array P542D RAID controller.
The HPE Apollo 6000 Power Shelf supports up to (6) hot-swap PSUs in a 1U rack shelf. Each power shelf can support (2) to (4) Apollo 6500 chassis, depending on the configuration of the XL270d trays.
4U Apollo 6500 Chassis | 845627-B21
(4) Fans per Tray Installed
(1) Required HPE Power Shelf
Supports up to (2) XL270d Trays
HPE Advanced Power Manager Module (APM)
1U Power Shelf | 735131-B21
Up to (6) 60mm PSUs
Up to (4) Apollo 6500 Supported
Up to 15.9 kW of DC power
Supports single-phase and three-phase AC input
Integrated with HPE Advanced Power Manager (APM)
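As a rough sketch of the power-shelf arithmetic above: one shelf delivers up to 15.9 kW of DC power across as many as four Apollo 6500 chassis. The even-share split below is an illustrative assumption; actual budgets depend on PSU population, AC input, and the configured redundancy mode.

```python
# Illustrative power-budget arithmetic for the Apollo 6000 Power Shelf.
# Assumes the full 15.9 kW DC output (all 6 PSUs installed) is shared
# evenly across chassis; real budgets vary with redundancy settings.

SHELF_CAPACITY_KW = 15.9  # max DC output of one fully populated shelf

def per_chassis_budget_kw(chassis_count: int) -> float:
    """Even-share DC budget per Apollo 6500 chassis on one power shelf."""
    if not 1 <= chassis_count <= 4:
        raise ValueError("a power shelf supports 1 to 4 Apollo 6500 chassis")
    return SHELF_CAPACITY_KW / chassis_count

for n in (2, 4):
    print(f"{n} chassis -> {per_chassis_budget_kw(n):.3f} kW each")
```

This makes the (2 to 4) chassis range concrete: denser GPU configurations in the XL270d trays leave room for fewer chassis per shelf.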
2U ProLiant XL270d | 845628-B21
(2) Intel Xeon E5-2600 v4
(16) 2400MT/s DDR4 DIMMs
(8) SFF Drives Per Tray
(2) PCIe Gen3
(8) GPUs Supported
Power Management Module
(4) Power Slots per Power Module with Single Enclosure
(1) Redundant & (3) Power Slots per Power Module with Dual Enclosure
System fans
(4) Fans must be installed per XL270d
Chassis Management Module
(slot 1) Connects the Power Shelf
(slot 2) APM 1.0
(slot 3) APM 2.0
(slot 4 p1) Only one iLO port can be connected to the network
(slot 4 p2) This iLO port is used only for daisy chaining
HPE Apollo 6500 Power Shelf
Chassis & Power Management Modules
With a single enclosure, all 4 PSU ports are filled. With dual-enclosure cabling, 1 of the PSU ports is reserved for redundancy, which is why only 3 cables are wired from the Power Shelf to each chassis.
Advanced Power Management (APM)
Connects with the HPE Apollo 6000 Power Shelf to control power distribution across enclosures and nodes.
Tracks power consumption at the rack, enclosure, and chassis levels
Helps optimize power usage by setting limits on the maximum power consumption of the system, ensuring no component exceeds its power capacity
Apollo 6500 Daisy Chaining (CMM p4)
Port 4 on the CMM is specifically used for daisy chaining via iLO
Streamlines system administration by enabling unified control of multiple enclosures.
HPE XL270d Gen9 Service Guide
HPE XL270d Gen9 User Guide
The XL270d comes with an embedded Software RAID controller, the HPE Dynamic Smart Array B140i Controller (SATA Only). The embedded B140i will operate in UEFI mode only. For legacy support, AHCI mode is required.
The preferred RAID controller for SAS or SATA drives is the HPE Smart Array P542D (851508-B21). This Smart Array has a unique PCIe connection that has been designed into the XL270d for this RAID controller.
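As a sketch, logical drives on a Smart Array controller such as the P542D can be managed with HPE's `ssacli` utility. The slot number and physical drive IDs below are illustrative assumptions for a single XL270d, not a verified configuration:

```shell
# Illustrative only: the slot number and drive IDs depend on the actual
# XL270d configuration; confirm with "ssacli ctrl all show config".

# List controllers and their status to find the P542D's slot
ssacli ctrl all show status

# Create a RAID 1 logical drive from the first two SFF bays (assumed IDs)
ssacli ctrl slot=1 create type=ld drives=1I:1:1,1I:1:2 raid=1
```

Note that the embedded B140i handles SATA drives only and requires UEFI mode, so SAS drive configurations should be built on the P542D.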
Hot Plug SFF SAS 7.2K | 8TB (8 x 1.0TB)
Hot Plug SFF SAS 10K | 14.4TB (8 x 1.8TB)
Hot Plug SFF SAS 15K | 4.8TB (8 x 600GB)
Hot Plug SFF SATA | 8TB (8 x 1TB)
Hot Plug SFF SATA SSD | 30.7TB (8 x 3.84TB)
Hot Plug SFF SAS SSD | 15.4TB (8 x 1.92TB)
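The raw-capacity totals above are just drive count times per-drive size, rounded to one decimal place. A quick sanity check of that arithmetic (drive names and sizes taken from the list above):

```python
# Sanity-check the raw-capacity math in the XL270d drive options.
# Sizes are marketing TB; totals are rounded to one decimal place.

drive_options = {
    "SAS 7.2K": (8, 1.0),   # (drive count, TB per drive)
    "SAS 10K":  (8, 1.8),
    "SAS 15K":  (8, 0.6),   # 600 GB = 0.6 TB
    "SATA":     (8, 1.0),
    "SATA SSD": (8, 3.84),
    "SAS SSD":  (8, 1.92),
}

for name, (count, size_tb) in drive_options.items():
    total_tb = round(count * size_tb, 1)
    print(f"{name}: {count} x {size_tb} TB = {total_tb} TB")
```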
4:1 Module Riser Kit (850508-B21)
Supports 4 GPUs and 1 PCIe slot per CPU.
Supports NVIDIA Tesla K80
8:1 Module Riser Kit (850500-B21)
Supports 8 GPUs and 1 PCIe slot per CPU.
GPUDirect is not supported for more than 8 logical GPUs per CPU; because the K80 contains two logical GPUs per card, the 8:1 configuration will not support GPUDirect.
For workloads with a great deal of GPU-to-GPU communication, such as deep learning, the "HPE Peer to Peer GPU Mode FIO Kit" (782400-B21) is recommended. Most traditional HPC workloads involve little GPU-to-GPU communication, and the "HPE High Performance GPU Mode FIO Kit" (782402-B21) configuration will optimize bandwidth back to the CPU and system main memory.
Both riser options support either GPU Mode.
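The GPUDirect limit above reduces to simple counting: cards per CPU times logical GPUs per card must not exceed 8. A minimal sketch of that rule (the K80's two logical GPUs per card is from the text; other entries are illustrative assumptions, not an HPE support matrix):

```python
# Sketch of the GPUDirect constraint described above: at most 8 logical
# GPUs per CPU. The K80 is a dual-GPU board, so each card counts twice.

LOGICAL_GPUS_PER_CARD = {
    "K80": 2,              # dual-GPU board (from the text above)
    "single-GPU card": 1,  # illustrative assumption
}

def gpudirect_supported(card: str, cards_per_cpu: int) -> bool:
    """True if the riser configuration stays within 8 logical GPUs per CPU."""
    return cards_per_cpu * LOGICAL_GPUS_PER_CARD[card] <= 8

print(gpudirect_supported("K80", 4))  # 4:1 riser, 8 logical GPUs -> True
print(gpudirect_supported("K80", 8))  # 8:1 riser, 16 logical GPUs -> False
```

This is why the 4:1 riser (850508-B21) keeps a K80 configuration GPUDirect-capable while the 8:1 riser (850500-B21) does not.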