SKU: 699-21001-0230-600

NVIDIA A100 80GB PCIe x16 300W DW FH/FL GPU | 699-21001-0230-600

Quick Specs

  • Model: A100
  • GPU Memory: 80 GB HBM2e
  • Memory ECC: Yes
  • Memory Bandwidth: 1935 GB/sec
  • Max Power Consumption: 300W
  • Graphics Bus / System Interface: PCIe Gen4 x16 / NVLink bridge
  • Interconnect Bandwidth: 64 GB/sec (PCIe 4.0)
  • Slot Width: DW (dual-slot)
  • GPU Height / Length: FH/FL (full height, full length)
  • Auxiliary Cable: CPU 8-pin
  • Workload: HPC / AI / Database Analytics
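As a sanity check, the quoted memory bandwidth follows directly from the HBM2e interface. A minimal sketch, assuming the A100 80GB's 5120-bit memory bus and an effective data rate of about 3024 MT/s (neither figure appears in the specs above):

```python
# Back-of-envelope check of the listed 1935 GB/sec memory bandwidth.
# Assumptions (not stated in the listing): 5120-bit HBM2e bus,
# ~3024 MT/s effective data rate.
bus_width_bits = 5120
data_rate_mts = 3024                      # mega-transfers per second

bytes_per_transfer = bus_width_bits // 8  # 640 bytes moved per transfer
bandwidth_gb_s = bytes_per_transfer * data_rate_mts / 1000  # MB/s -> GB/s

print(f"{bandwidth_gb_s:.0f} GB/sec")     # prints "1935 GB/sec"
```

The same arithmetic explains why the 40 GB model, with a slower HBM2 data rate on the same bus width, lists a lower bandwidth figure.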

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Built on the NVIDIA Ampere architecture, the A100 is the engine of the NVIDIA data center platform. It provides up to 20X higher performance than the prior generation and can be partitioned into as many as seven isolated GPU instances via Multi-Instance GPU (MIG) to adjust dynamically to shifting demands. The A100 80GB debuts the world’s fastest memory bandwidth at nearly 2 terabytes per second (TB/s) to run the largest models and datasets.



Welcome to the Request Express Quoting Form!

This streamlined form is designed to simplify the quoting process. Whether you're uncertain about specific parts or simply want a hassle-free experience, you're in the right place.
Provide the details below and we'll generate a customized quote for your server. Even if you're unsure of the exact components you need, our team will tailor a solution based on the information you provide.

Let's get started!
Simply fill out the form, and we'll take care of the rest.


Name *
Email Address *
Message *
Preferred Brand
Form Factor
Rack Units of Height (U)
Total Memory
Total Storage
Storage Type
RAID
Networking Connections Needed
Rack Space Restrictions
Preferred Operating System