





- PLEASE CONTACT FOR CUSTOM CONFIGURATIONS. DISK SHELVES, COMPLETE CONFIGURATIONS & TURN KEY SYSTEMS AVAILABLE.
- SPARE PARTS AVAILABLE FOR EACH MODEL. PLEASE ASK FOR PRICING ON SPARE DISK, CONTROLLERS, POWER SUPPLIES, NICs, HBAs, SFPs, Cables, etc.
Building on HPE ProLiant as the intelligent foundation for hybrid cloud, the refurbished HPE ProLiant DL385 Gen10 Plus server offers 2nd generation AMD® EPYC™ 7000 Series processors delivering up to 2X [1] the performance of the prior generation. With up to 128 cores (per 2-socket configuration) and 32 DIMM slots for memory speeds up to 3200 MHz, the HPE ProLiant DL385 Gen10 Plus server delivers low-cost virtual machines (VMs) with unprecedented security. Equipped with PCIe Gen4 capabilities, the HPE ProLiant DL385 Gen10 Plus offers improved data transfer rates and higher networking speeds.
This balance of processor cores, memory, and I/O makes the HPE ProLiant DL385 Gen10 Plus an ideal choice for virtualization, memory-intensive, and HPC workloads.
The HPE DL385 Gen10 Plus rack server is very similar to the DL385 Gen10 rack server. This system only supports 2nd generation AMD EPYC 7000 Series processors. The server can support up to 2 processors; mixing of processors is not allowed. This powerful 2U 2P rack server supports 32 DDR4 DIMMs in a 2-socket configuration; mixing of x4 and x8 memory is not allowed. The DL385 Gen10 Plus has 3 PCIe risers for a total of 8 PCIe slots, which allows you to easily configure this server to your needs.
This rack server offers many storage upgrades for each chassis, some with limitations. The DL385 Gen10 Plus can support an extra front drive in the media bay slot, internal (mid-plane) drives that sit above the CPUs and DIMMs, and rear drives that sit in the PCIe riser cages. We will go over the limitations here; for a full, detailed understanding of all the limitations and requirements, read the spec manual below.
These servers do not come with an embedded LOM, so a choice of OCP 3.0 or stand-up network card is mandatory for network capabilities.
All CTO servers are Energy Star 3.0 compliant.
HPE DL385 Gen10 Plus Maximum Internal Storage
| CTO Server [Chassis] | HPE ProLiant DL385 Gen10 Plus 8 LFF CTO Server | HPE ProLiant DL385 Gen10 Plus 12 LFF CTO Server | HPE ProLiant DL385 Gen10 Plus 8 SFF CTO Server | HPE ProLiant DL385 Gen10 Plus 24 SFF CTO Server |
|---|---|---|---|---|
| SKU Number | P14278-B21 | P14280-B21 | P14281-B21 | P14279-B21 |
| Processor | Not included as standard | Not included as standard | Not included as standard | Not included as standard |
| DIMM Slots | 32 DIMM slots | 32 DIMM slots | 32 DIMM slots | 32 DIMM slots |
| Storage Controller | Choice of HPE modular Smart Array and PCIe plug-in controller | Choice of HPE modular Smart Array and PCIe plug-in controller | Choice of HPE modular Smart Array and PCIe plug-in controller | Choice of HPE modular Smart Array and PCIe plug-in controller |
| PCIe | Three slots standard in the primary riser, up to eight slots with 2 processors | Three slots standard in the primary riser, up to eight slots with 2 processors | Three slots standard in the primary riser, up to eight slots with 2 processors | Three slots standard in the primary riser, up to eight slots with 2 processors |
| Drive Cage - included | 8 LFF | 12 LFF | 8 SFF | 24 SFF |
| Network Controller | Choice of OCP or stand-up card | Choice of OCP or stand-up card | Choice of OCP or stand-up card | Choice of OCP or stand-up card |
| Fans | 4 Standard | 6 Performance | 6 Standard | 6 Performance |
| Management | HPE iLO with Intelligent Provisioning (standard), iLO Advanced and OneView (optional) | HPE iLO with Intelligent Provisioning (standard), iLO Advanced and OneView (optional) | HPE iLO with Intelligent Provisioning (standard), iLO Advanced and OneView (optional) | HPE iLO with Intelligent Provisioning (standard), iLO Advanced and OneView (optional) |
| USB | 1x USB 3.0 standard plus iLO front service port | 1x USB 3.0 standard plus iLO front service port | 1x USB 3.0 standard plus iLO front service port | 1x USB 3.0 standard plus iLO front service port |
| CTO Server | 8 SFF | 24 SFF | 8 LFF | 12 LFF |
|---|---|---|---|---|
| Included Drive Cage | 8 SFF SAS/SATA | 3X 8 SFF SAS/SATA | 8 LFF + UMB | 12 LFF Chassis |
| Universal Media Bay | 1 Optional | Not available | 1 Included | Not available |
| ODD | 1 Optional with UMB | Not available | 1 Optional | Not available |
| 8 SFF Drive Cage | Up to 2 Optional | Not available | Not available | Not available |
| 8 SFF SAS/SATA/NVMe (Mid-plane) | Not available | Not available | Not available | Not available |
| 8 NVMe/SAS Bay | Up to 2 Optional | Not available | Not available | Not available |
| 8 NVMe Cage | Up to 2 Optional | Not available | Not available | Not available |
| 2 SFF SAS/SATA (Front) | 1 Optional with UMB | Not available | 1 Optional | Not available |
| 2 SFF SAS/SATA (Rear) | 1 Optional | 1 Optional | 1 Optional | 1 Optional |
| 2 NVMe (FRONT) | 1 Optional with UMB | Not available | 1 Optional | Not available |
| 4 LFF Mid-plane | Not available | Not available | 1 Optional | 1 Optional |
| 4 LFF Rear | Not available | Not available | 1 Optional | 1 Optional |
The DL385 Gen10 Plus server supports a Type-A RAID-on-Chip (AROC) Smart Array, also known as a flexible storage controller. Installing an AROC card saves a PCIe slot. The AROC connector on the system board also supports NVMe drives via the AROC NVMe adapter. The DL385 Gen10 Plus server also supports Smart Array PCIe stand-up cards. If you want to install 24 drives, you will need the 12Gb SAS expander, which sits in slot 3 of the primary riser.
The OS can be loaded from either of two dedicated boot devices: the HPE NS204i-p x2 Lanes NVMe PCIe3 x8 OS Boot Device (P12965-B21) or a simple USB drive (P21868-B21). The NS204i-p is only supported in slot 1 of the primary riser and slot 8 of the tertiary riser. The HPE Universal SATA AIC HHHL M.2 SSD Kit (878783-B21) and the NS204i-p cannot be selected together. When the NS204i-p and a double-wide GPU are selected together, the secondary riser must be selected and a maximum of two double-wide GPUs is supported. SATA M.2 is not supported in LFF/SFF chassis with a mid-tray configuration due to thermal concerns.
There are too many PCIe riser options to cover each one in detail here. The table below summarizes them; to see all the limitations and requirements for a riser, click its View Item button, or read the spec manual at the bottom of the page for a detailed description of each riser. In the riser position columns, D marks the default riser, O an optional position, and N a position that is not supported; the slot columns list each slot's bus width in Gen3 lanes.
| Part Number | Description | Primary | Secondary | Tertiary | Top Slot | Middle Slot | Bottom Slot | NVMe Direct Connect Ports | NVMe Direct Connect Drive Count |
|---|---|---|---|---|---|---|---|---|---|
| N/A | This is the default riser in the chassis | D | N | N | x8 | x16 | x8 | - | - |
| P14587-B21 | HPE DL38X Gen10 Plus x8/x16/x8 Secondary Riser Kit | N | O | N | x8 | x16 | x8 | - | - |
| P14592-B21 | HPE DL38X Gen10 Plus x16x16 Slot1/2 Riser FIO Kit | O | N | N | x16 | x16 | 0 | - | - |
| P14589-B21 | HPE DL38X Gen10 Plus x16/x16 Slot1/2 Secondary Riser Kit | N | O | N | x16 | x16 | 0 | - | - |
| P14599-B21 | HPE DL38X Gen10 Plus Primary x16 x16 Slot2/3 Riser FIO Kit | O | N | N | 0 | x16 | x16 | - | - |
| P14590-B21 | HPE DL38X Gen10 Plus x16/x16 Slot2/3 Secondary Riser Kit | N | O | N | 0 | x16 | x16 | - | - |
| P14600-B21 | HPE DL38X Gen10 Plus Slot1 x16 Adder for Slot2/3 Riser | O | O | N | x16 | 0 | 0 | - | - |
| P14581-B21 | HPE DL38X Gen10 Plus x8x8 2x16 Tertiary Riser Kit | N | N | O | x8 | x8 | 0 | - | - |
| P14588-B21 | HPE DL38X Gen10 Plus x16 Tertiary Riser Kit | N | N | O | x16 | 0 | 0 | - | - |
| P14575-B21 | HPE DL38X Gen10 Plus Primary/Secondary NEBS-compliant Riser Kit | O | O | N | 0 | 0 | 0 | - | - |
| P14577-B21 | HPE DL38X Gen10 Plus Tertiary NEBS-compliant Riser Kit | N | N | O | 0 | 0 | 0 | - | - |
| P14505-B21 | HPE DL385 Gen10 Plus 2SFF NVMe/SAS Primary/Secondary Smart Carrier Riser Kit | O | O | N | 0 | 0 | 0 | - | - |
| P25902-B21 | HPE DL385 Gen10 Plus 2SFF NVMe/SAS Smart Carrier Secondary Riser Kit | N | O | N | 0 | 0 | 0 | 1 | 2 |
| P14579-B21 | HPE DL38X Gen10 Plus 2LFF Primary/Secondary Riser Kit | O | O | N | 0 | 0 | 0 | - | - |
| P25903-B21 | HPE DL38X Gen10 Plus 2LFF Secondary Riser Kit | N | O | N | 0 | 0 | 0 | - | - |
| P14580-B21 | HPE DL38X Gen10 Plus 2LFF Tertiary Riser Kit | N | N | O | 0 | 0 | 0 | - | - |
GPUs and workload accelerators greatly increase compute performance for HPC or virtualization. GPUs and mid-tray drives cannot be installed together. You will also need to make sure the correct riser is installed to support the GPU, along with any other requirements for the specific GPUs you install. When the NS204i-p and a double-wide GPU are selected together, the secondary riser must be selected and a maximum of two double-wide GPUs is supported.
| Part Number | Card | QTY Per Server | CPU Supported |
|---|---|---|---|
| R0W29C | HPE NVIDIA Tesla T4 16GB Computational Accelerator | 8 | 240W or below |
| R4D73C | NVIDIA Tesla V100S 32GB Computational Accelerator | 3 | |
| R1F95C | HPE NVIDIA Quadro RTX 4000 Graphics Accelerator | 6 | |
| R0Z45C | HPE NVIDIA Quadro RTX 6000 Graphics Accelerator | 3 | |
| R1F97C | HPE NVIDIA Quadro RTX 8000 Graphics Accelerator | 3 | |
| R6B53C | NVIDIA A100 40GB GPU Module for HPE | 3 | |
| R4B02C | Xilinx Alveo U50 Accelerator for HPE | 8 | |
| R4B03C | Xilinx Alveo U250 Accelerator for HPE | 3 |
Without an embedded LOM on the system board, the HPE DL385 Gen10 Plus needs either an OCP 3.0 adapter or a stand-up network adapter card installed for network capabilities. Installing an OCP 3.0 adapter saves a PCIe slot without any other issues or limitations. If the installed OCP 3.0 card requires x16 lanes, install cable P14318-001 to CPU 1 to configure the slot as a single x16 OCP NIC. This rack server supports InfiniBand adapters with up to single-port 200Gb connections.
With so many options, there are multiple reasons the system may need extra cooling. Configurations with 12 LFF or more drives require the max performance fan kit (P14608-B21). If CPU power is above 180 W, a 2U performance heat sink is required. The max performance fan kit is also required for rear drives or NVMe SFF configurations. SATA M.2 is not supported in LFF/SFF chassis with a mid-tray configuration due to thermal concerns. If you are unsure which devices carry which requirements, view the product notes or read the spec manual below.
All power supplies in a server should match; mixing power supplies is not supported. HPE Flexible Slot (Flex Slot) power supplies share a common electrical and physical design that allows for tool-less installation into HPE ProLiant Gen10 Plus Performance servers. Flex Slot power supplies are certified for high-efficiency operation and offer multiple power output options, allowing users to "right-size" a power supply for a specific server configuration. This flexibility helps reduce power waste, lower overall energy costs, and avoid "trapped" power capacity in the data center.
The DL38X servers offer NVMe Balanced Support Bundles, which provide all the parts and cabling specs for an NVMe configuration. All the parts for each bundle are in the CTO and can be configured and set up for you. To learn more about the Balanced NVMe Support Bundles, read the spec manual; the bundle cabling is shown on the last pages of the PDF.
If you have any questions or concerns, please contact us for support.
Monitor your servers for ongoing management, service alerting, reporting, and remote management with HPE iLO.
Configure and boot your servers securely with industry-standard Unified Extensible Firmware Interface (UEFI).
The iLO RESTful API is Redfish API conformant and offers simplified server management automation, such as configuration and maintenance tasks, based on modern industry standards (see the minimal query sketch below).
Hassle-free server and OS provisioning for one or a few servers with Intelligent Provisioning.
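As an illustration of the Redfish-conformant iLO RESTful API, here is a minimal, hypothetical sketch that pulls basic inventory and health data from iLO 5. The address and credentials are placeholders, and /redfish/v1/Systems/1 is the standard Redfish ComputerSystem resource exposed by iLO; adapt it to your own environment.

```python
# Minimal sketch: query an iLO 5 Redfish endpoint for basic system info.
# The host, username, and password below are placeholders for illustration only.
import requests

ILO_HOST = "https://192.0.2.10"   # replace with your iLO address
AUTH = ("admin", "password")      # replace with real credentials

# iLO 5 exposes the standard Redfish service root at /redfish/v1/
resp = requests.get(
    f"{ILO_HOST}/redfish/v1/Systems/1",
    auth=AUTH,
    verify=False,   # iLO often ships with a self-signed certificate
    timeout=10,
)
resp.raise_for_status()
system = resp.json()

# Standard Redfish ComputerSystem properties
print("Model:        ", system.get("Model"))
print("Power state:  ", system.get("PowerState"))
print("Health:       ", system.get("Status", {}).get("Health"))
print("Memory (GiB): ", system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))
```

The same endpoints can be scripted for the monitoring, alerting, and configuration tasks listed above.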