
Current Configuration

(List below serves as quick links to each section)

  • HPE Chassis
  • HPE Processor
  • HPE Memory
  • GPU Module
  • GPU SXM2
  • GPU PCIe
  • GPU Topologies
  • Drive Options
  • HPE SFF Drives SAS (2.5in)
  • HPE SFF Drives SATA (2.5in)
  • HPE NVMe Drives
  • PCI Cards
  • Networking Adapters
  • FlexibleLOM
  • Host Bus Adapters (HBAs)
  • Storage Controller
  • InfiniBand Adapters
  • iLO Server Management
  • Rail Kit
  • Power Supply Unit (PSU)
  • Select Warranty


HPE ProLiant Apollo 6500 Server (G10) CTO

HPE


HPE-APOLLO-6500-G10-CTO

 

Multi-Quantity Discounts Available for ALL Line Items

** LARGE DISCOUNTS available for complete systems. Pricing displayed is for single item purchases **

Use the configuration tool to create your solution and submit it to us for a quote, OR send us a message with your complete build requirements to receive the best pricing.

READ ME

  • Some options may not be displayed until the compatible parent option is chosen, e.g., Chassis → Drives, Processor → RAM.
  • "Quote" items can be added to your cart and sent to us via the cart page.
  • Click the blue bar to collapse/expand a section after choosing your options.

HPE Chassis

This chassis supports two GPU module options: up to (8) SXM2 GPUs or up to (8) PCIe GPUs.
Please choose the GPU module below. Mixing of GPU types is not supported.

HPE Processor

Supports up to (2) processors.
Mixing of processor models is not supported.

Gen1

Gen2

HPE Memory

Memory speed depends on the processors you select.

GPU Module

Only one GPU module can be installed per server.

GPU SXM2 (Optional)

Supports up to (8) SXM2 GPUs.
Each GPU requires a heatsink: 4 in front, 4 in back.

GPU PCIe (Optional)

Supports up to 8 PCIe GPUs
The PCIe GPUs support Topologies which can be selected below.

GPU Topologies (Optional)

Only supported with PCIe GPUs.
Images of the GPU topologies are above, with details of each in the description below.

Drive Options (Optional)

System supports 16 SFF Drives.
If the NVMe drive cage is installed you MUST install the NVMe Enablement kit.

HPE SFF Drives SAS (2.5in) (Optional)

If SAS drives are installed, you will need to install a Smart Array controller.
To save a PCIe slot, this system supports Type-A Smart Array controllers.

HPE SFF Drives SATA (2.5in) (Optional)

SATA drives are supported by the embedded S100i controller or AHCI mode.

HPE NVMe Drives (Optional)

The correct drive cage must be installed to support NVMe drives, along with the NVMe Enablement Kit.

PCI Cards (Optional)

The M.2 riser sits in the primary slot and supports the (2) M.2 drives that come with it.

Networking Adapters (Optional)

FlexibleLOM (Optional)

These do not take up a PCIe slot. They are the recommended way to expand the networking capabilities of the server.

Host Bus Adapters (HBAs) (Optional)

Storage Controller (Optional)

InfiniBand Adapters (Optional)

iLO Server Management (Optional)

Rail Kit (Optional)

Power Supply Unit (PSU)

Select Warranty

Helpful Tip: Once your desired configuration is selected, click "Add to Cart".
From the cart page you can submit a quote request for best pricing.

-PLEASE CONTACT FOR CUSTOM CONFIGURATIONS. DISK SHELVES, COMPLETE CONFIGURATIONS & TURN KEY SYSTEMS AVAILABLE.

-SPARE PARTS AVAILABLE FOR EACH MODEL. PLEASE ASK FOR PRICING ON SPARE DISK, CONTROLLERS, POWER SUPPLIES, NICs, HBAs, SFPs, Cables, etc.

What's New

  • (1) XL270d Gen 10 Trays | P00392-B21
  • (2) Intel Xeon 1st or 2nd Gen Scalable Processors Per Tray
  • (24) 2933MT/s DDR4 DIMMs
  • (16) SFF Drives
  • Up to (4) NVMe Drives Supported
  • (1) PCIe Slot Per XL270d Tray
  • Embedded 4 x 1GbE Network Adapter
  • Support for FlexibleLOM Adapters
  • Support for Type-A Smart Array Controllers
  • Supports (2) GPU Module Options (SXM2 or PCIe)
  • (8) GPUs Supported Per GPU Module
  • *(12) T4 GPUs can be supported (PCIe GPU Module only)
  • (4) PCIe Slots Per GPU Module
  • Support for NVLink with the latest NVIDIA GPUs
  • HPE iLO 5

Other Versions
  • HPE ProLiant Apollo 2000 G9 | G10
  • HPE ProLiant Apollo 4200 G9 | G10 | G10+
  • HPE ProLiant Apollo 4500 G9 | ---
  • HPE ProLiant Apollo 6500 G9 | G10 | G10+
  • HPE ProLiant Apollo pc40
  • HPE ProLiant Apollo sx40


Ideal for:

  • Artificial Intelligence (AI)
  • Big Data Analytics
  • Data Analytics
  • Deep Learning (DL)
  • High-Performance Computing (HPC)
  • Language Model
  • Machine Learning (ML)

The HPE Apollo 6500 Gen10 Server is specifically designed for HPC, AI, Deep Learning (DL), and Machine Learning (ML) applications. It features the robust 6500 chassis, which integrates the XL270d Gen10 system board with a GPU module stacked on top, ensuring optimal performance for intensive computational workloads. The server supports up to 16 drives on its 2-socket, 24-DIMM system tray, allowing for extensive storage and memory capacity. The Apollo Power Distribution Unit (PDU) houses the midplanes, efficiently connecting the entire system to deliver seamless integration and exceptional power management for demanding workloads.


CTO Configuration Support

Need help with the configuration? Contact us today!

Chassis & Tray Options
HPE ProLiant XL270d Gen 10 Tray (P00392-B21)

The HPE Apollo 6500 Gen10 Server's system tray, positioned beneath the GPU module, is a powerhouse of performance and flexibility. It supports two CPUs and 24 DIMMs, offering extensive computational and memory capacity. The XL270d system board includes a dedicated FlexibleLOM for customizable networking, internal M.2 drives for high-speed boot or cache storage, a Type-A Smart Array controller, and one PCIe slot for expansion. By default, the system features three SATA ports, enabling support for up to 12 SATA drives across two drive bays. For networking, it integrates four embedded 1GbE RJ-45 ports, ensuring robust connectivity for high-performance workloads.

HPE Apollo 6500 Gen10 Tray - P00392-B21: Maximum Internal Storage

Drive Type            | Max Capacity | Configuration
Hot Plug SFF SAS SSD  | 244 TB       | 16 x 15.3 TB
Hot Plug SFF SATA SSD | 122 TB       | 16 x 7.68 TB
Hot Plug SFF NVMe SSD | 30.7 TB      | 4 x 7.68 TB

Note: (2) M.2 drives are also supported.
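As a quick sanity check, the maximums in the table above are simply drive-bay count × per-drive capacity, with the listed totals rounded down (drive counts and sizes are taken from the table; the snippet below is only an illustration of that arithmetic):

```python
# Maximum internal storage = number of drive bays x per-drive capacity (TB).
# Counts and sizes are from the table above.
configs = {
    "Hot Plug SFF SAS SSD":  (16, 15.3),   # 16 bays x 15.3 TB
    "Hot Plug SFF SATA SSD": (16, 7.68),   # 16 bays x 7.68 TB
    "Hot Plug SFF NVMe SSD": (4, 7.68),    # only 4 NVMe bays are supported
}

for drive, (count, size_tb) in configs.items():
    total = round(count * size_tb, 2)
    print(f"{drive}: {count} x {size_tb} TB = {total} TB")
# SAS works out to 244.8 TB, SATA to 122.88 TB, NVMe to 30.72 TB
```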

HPE Apollo 6500 Gen10 M.2 Drives

The default S100i controller uses 14 embedded SATA lanes, but only 12 lanes across 3 ports are accessible, as 2 lanes are used to support the 2 M.2 slots on the primary riser. That primary riser is the HPE Apollo PCIe/SATA M.2 FIO Riser Kit (863661-B22).
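The lane accounting above can be sketched in a few lines (assuming, as implied here and in the storage-controller section below, that one SATA lane serves one SFF drive):

```python
# SATA lane accounting for the embedded S100i controller.
# Numbers come from the paragraph above; one-lane-per-drive is an assumption.
embedded_lanes = 14   # total embedded SATA lanes
m2_lanes = 2          # routed to the 2 M.2 slots on the primary riser
sata_ports = 3        # ports carrying the remaining lanes

accessible_lanes = embedded_lanes - m2_lanes
print(f"Lanes available for SFF SATA drives: {accessible_lanes}")   # 12
print(f"Lanes (drives) per port: {accessible_lanes // sata_ports}") # 4
```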

HPE Apollo 6500 Gen10 M.2 Riser Card

PCIe Slot Information
System Board
Slot # | Technology | Bus Width | Connector Width | Slot Form Factor              | Notes
21     | PCIe 3.0   | x16       | x16             | Full-height, half-length slot | Proc 2
HPE GPU Modules

HPE's manuals are severely lacking in GPU information, so please take your time when reading them. Below is a summary of what we have found.

HPE 8 GPU PCIe Module
v1 (P03032-B22) or v2 (P13153-B22)

Only (1) version is supported at a time

Supports up to (8) GPUs

NOTE: Supports up to (12) Tesla T4 GPUs

Up to (4) high-speed fabric adapters

Four topologies with NVLink

4:1 or 8:1 topology (PCIe GPUs only)

HPE 8 SXM2 GPU Module
(P03032-B22) - (P03727-001)

Supports up to (4 or 8) SXM2 GPUs

Each heatsink supports 4 NVLink SXM-2 GPUs

Mixing GPUs is not supported

Up to (4) PCIe slots

The SXM2 section has been removed from the QuickSpecs manual. We are not sure why, but most of this information comes from the older 2018 manuals.

SXM-2 GPU Module

Slot # | Technology | Bus Width | Connector Width | Slot Form Factor              | Notes
11     | PCIe 3.0   | x16       | x16             | Full-height, half-length slot | Proc 1
12     | PCIe 3.0   | x16       | x16             | Full-height, half-length slot | Proc 1
9      | PCIe 3.0   | x16       | x16             | Full-height, half-length slot | Proc 2
10     | PCIe 3.0   | x16       | x16             | Full-height, half-length slot | Proc 2

PCIe GPU Module

Slot # | Technology | Bus Width | Connector Width | Slot Form Factor              | Notes
11     | PCIe 3.0   | x16       | x16             | Full-height, half-length slot | Dependent on topology selected in BIOS. See User and Administrator Guide for full details
12     | PCIe 3.0   | x16       | x16             | Full-height, half-length slot |
9      | PCIe 3.0   | x16       | x16             | Full-height, half-length slot |
10     | PCIe 3.0   | x16       | x16             | Full-height, half-length slot |

Please view the manual for all the topology information.

Storage Controllers

The HPE Apollo 6500 Gen10 Server comes with three SATA ports, which support up to 12 SATA drives across two drive bays. To use SAS drives, a Smart Array Adapter is required. HPE recommends installing the Type-A Adapter for this purpose, as it conserves the server's PCIe slot for other expansions, such as NVMe drives.

HPE Apollo 6500 Gen10 Type-A Storage Controllers
NVMe Drives

For NVMe storage, an NVMe Enablement Kit must be installed. This kit occupies the PCIe slot on the system tray and connects to the NVMe port on the tray. Additionally, the NVMe Midplane needs to be installed to provide the data path to the backplanes. The server supports a maximum of four NVMe drives, offering high-speed storage capabilities ideal for performance-intensive workloads.

HPE Apollo 6500 Gen10 NVMe Enablement Kit


Welcome to the Request Express Quoting Form!

This streamlined form is designed to simplify the quoting process for you. Whether you're uncertain about specific parts or seeking a hassle-free experience, you're in the right place.
Provide us with the necessary details, and we'll generate a customized quote for your server. Even if you're unsure about the exact components you need, we've got you covered. Our team will tailor a solution based on the information you provide.

Let's get started!
Simply fill out the form, and we'll take care of the rest.

