2 petaFLOPS AI performance (INT8); 250 teraFLOPS FP32; 125 teraFLOPS FP64

The OSS-VOLTA16 is an HGX-2 platform with unprecedented compute power, bandwidth, and memory topology, built to train massive models, analyze large datasets, and solve simulations faster and more efficiently than was previously possible in a single server. Its 16 Tesla V100 GPUs work as a single unified 2-petaFLOP accelerator with half a terabyte (TB) of total GPU memory, allowing it to handle the most computationally intensive workloads. GPU management and monitoring software comes preinstalled on the OSS-VOLTA16. The GPU-accelerated server also includes dual high-performance Intel Xeon Scalable processors with a base configuration of 512GB of DDR4 memory, scalable to 3TB. Sixteen PCIe Gen3 x16 slots are available for expansion and for scaling out OSS-GPUltima clusters over InfiniBand or high-speed Ethernet networking. The appliance also includes sixteen 2.5” front-removable NVMe drive bays for large model storage.
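
As a rough illustration of the unified 16-GPU configuration described above, the sketch below uses the standard CUDA runtime API to enumerate the installed GPUs and total their memory; on a fully populated OSS-VOLTA16 it would report 16 devices and roughly half a terabyte of HBM2. Device counts and memory sizes are whatever the machine actually reports, and error checking is omitted for brevity.

    // Sketch: enumerate GPUs and sum their memory (CUDA runtime API).
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);              // expected: 16 on a full OSS-VOLTA16
        size_t totalBytes = 0;
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            totalBytes += prop.totalGlobalMem;   // per-GPU HBM2 capacity
            printf("GPU %d: %s, %.1f GB\n", i, prop.name,
                   prop.totalGlobalMem / 1073741824.0);
        }
        printf("Total GPU memory: %.1f GB across %d GPUs\n",
               totalBytes / 1073741824.0, count);
        return 0;
    }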

HARDWARE HIGHLIGHTS

  • 10U System Chassis
  • Dual Intel Xeon Scalable Architecture CPUs
  • Up to 3TB DDR4 System Memory
  • Sixteen 2.5” NVMe SSDs and six 2.5” SAS drives
  • Sixteen Volta SXM3 GPUs with 300GB/s NVLink
  • Sixteen x16 PCIe 3.0 network slots
  • Two x16 PCIe 3.0 expansion slots
  • Six 3000W redundant power supplies
  • GPU Management and Monitoring pre-installed
  • NVIDIA optimized frameworks pre-installed

ADVANTAGES

  • Every GPU has peer-to-peer direct access to every other GPU’s memory, as well as high-bandwidth direct transfer operations, via NVLink (see the sketch after this list)
  • High performance for collective communications
  • PCIe bandwidth fully available for host and/or NIC communication during inter-GPU communication
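
A minimal sketch of how an application might exercise this capability, using the standard CUDA runtime API: it checks whether one GPU can access a peer’s memory, enables peer access, and performs a direct device-to-device copy, which travels over NVLink when peer access is enabled. The device indices and buffer size are illustrative placeholders, and error checking is omitted.

    // Sketch: enable peer-to-peer access and copy a buffer GPU-to-GPU.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const int src = 0, dst = 1;              // any two of the 16 GPUs
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, dst, src);
        printf("GPU %d can access GPU %d: %s\n", dst, src, canAccess ? "yes" : "no");

        float *bufSrc = nullptr, *bufDst = nullptr;
        const size_t bytes = 1 << 28;            // 256 MB test buffer (assumed size)

        cudaSetDevice(src);
        cudaMalloc(&bufSrc, bytes);

        cudaSetDevice(dst);
        cudaMalloc(&bufDst, bytes);
        if (canAccess)
            cudaDeviceEnablePeerAccess(src, 0);  // dst may now access src's memory directly

        // Device-to-device copy; with peer access enabled this goes over NVLink
        // rather than staging through host memory.
        cudaMemcpyPeer(bufDst, dst, bufSrc, src, bytes);
        cudaDeviceSynchronize();

        cudaFree(bufDst);
        cudaSetDevice(src);
        cudaFree(bufSrc);
        return 0;
    }

For collective operations spanning all 16 GPUs, deep learning frameworks typically rely on a library such as NCCL, which detects the NVLink topology automatically.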