With up to four PCI-SIG PCIe Cable 3.0 compliant links to a host server up to 100m away, the SCA8000 offers a flexible upgrade path that brings the power of NVLink to new and existing datacenters without upgrading server infrastructure. With advanced, independent IPMI system monitoring and a full-featured SNMP interface not available in any other GPU accelerator with NVLink, the SCA8000 fits seamlessly into a datacenter of any size.
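As an illustration of the kind of out-of-band monitoring an SNMP interface enables, the sketch below polls a single chassis sensor with the standard net-snmp snmpget tool. The management hostname, community string, and sensor OID are placeholders, not the SCA8000's published MIB.

```python
# Hypothetical sketch: poll one chassis sensor over SNMP using the net-snmp CLI.
# The hostname, community string, and OID are placeholders; consult the
# product's MIB documentation for the real sensor OIDs.
import subprocess

BMC_HOST = "sca8000-bmc.example.com"      # placeholder management address
COMMUNITY = "public"                       # placeholder SNMP community
SENSOR_OID = "1.3.6.1.2.1.99.1.1.1.4.1"    # ENTITY-SENSOR-MIB::entPhySensorValue.1

def read_sensor(host: str, community: str, oid: str) -> str:
    """Return the raw value of one SNMP sensor via snmpget."""
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    print("sensor value:", read_sensor(BMC_HOST, COMMUNITY, SENSOR_OID))
```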
The GPUltima-CI is a power-optimized rack that can be configured with up to 32 dual Intel Xeon Scalable Architecture compute nodes, 64 network adapters, 48 NVIDIA® Volta™ GPUs, and 32 NVMe drives on a 128Gb PCIe switched fabric, supporting tens of thousands of composable server configurations per rack. Using one or many racks, the OSS solution provides the resources to compose any combination of GPU, NIC, and storage that today's mixed-workload data center requires.
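To make the configuration count concrete, here is a back-of-the-envelope tally under a deliberately simple model: each composed server independently draws any number of GPUs, NICs, and NVMe drives from the rack pool. The counting model is illustrative, not OSS's sizing methodology.

```python
# Back-of-the-envelope count of distinct resource mixes one composed server
# could draw from a single GPUltima-CI rack pool. The model (any count of each
# resource type, chosen independently) is an illustrative assumption.
MAX_GPUS, MAX_NICS, MAX_NVME = 48, 64, 32

configs = (MAX_GPUS + 1) * (MAX_NICS + 1) * (MAX_NVME + 1)
print(f"distinct (GPU, NIC, NVMe) mixes per composed server: {configs:,}")
# -> 105,105 mixes, on the order of the "tens of thousands" claimed above
```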
The GPUltima packs up to 14.3 petaflops of compute into a rack of up to eight nodes, each containing up to 16 accelerators and one or two dual-socket servers. Customers can build out to the full rack one node at a time, depending on their application requirements. Node performance scales with the number of accelerators employed. All NVIDIA accelerators exchange data over 100Gb InfiniBand using GPUDirect RDMA, and all GPUs attach to a root complex through up to four 128Gb PCI Express connections. The root complexes connect to one another over InfiniBand and to the outside world over Ethernet.
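Applications typically reach GPUDirect RDMA through a communication library rather than programming the fabric directly. The minimal sketch below uses PyTorch's NCCL backend, which can route inter-node GPU traffic over InfiniBand with GPUDirect RDMA when the drivers and fabric support it; nothing in it is specific to the GPUltima.

```python
# Minimal multi-node all-reduce sketch. With the NCCL backend, inter-node GPU
# traffic can flow over InfiniBand with GPUDirect RDMA when the fabric and
# drivers support it. Launch one process per GPU, e.g. with
#   torchrun --nnodes=<N> --nproc_per_node=<GPUs> this_script.py
import os
import torch
import torch.distributed as dist

def main() -> None:
    dist.init_process_group(backend="nccl")     # reads rank/world size from env
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    x = torch.ones(1 << 20, device="cuda")      # 1M-element tensor per rank
    dist.all_reduce(x, op=dist.ReduceOp.SUM)    # summed across every GPU
    print(f"rank {dist.get_rank()}: sum element = {x[0].item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```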
The 4UV supports 2 PCIe 3.0 x16 host connections to 10 PCIe 3.0 x16 slots with two 1m cables to the host server. The system supports 8 double-width GPUs or 10 single-slot GPUs or add-in boards. Two fan choices allow for high-power GPU cooling up to 300W per GPU, or set-and-forget manual speed control using PWM fans when lower-power GPUs or add-in cards are used. Two 2000W power supplies provide up to 4000W of usable power to the GPU accelerator system.
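A quick sanity check on that power budget, using our own arithmetic; the overhead allowance for fans and the backplane is an assumed figure, not an OSS specification.

```python
# Power-budget sanity check for a fully loaded chassis. The per-GPU and
# supply figures come from the description above; the overhead allowance
# is an illustrative assumption.
GPUS = 8
WATTS_PER_GPU = 300      # max cooled GPU power, per the description
USABLE_WATTS = 4000      # two 2000W supplies
OVERHEAD_WATTS = 400     # assumed fans/backplane/host-adapter allowance

draw = GPUS * WATTS_PER_GPU + OVERHEAD_WATTS
print(f"estimated draw {draw}W of {USABLE_WATTS}W usable "
      f"({USABLE_WATTS - draw}W headroom)")
```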
The 4UV supports 2 PCIe 3.0 x16 host connections to 5 PCIe 3.0 x16 slots with two 1m cables to the host server. The system supports 4 double-width GPUs or 5 single-slot GPUs or add-in boards. Two fan choices allow for high-power GPU cooling up to 300W per GPU, or set-and-forget manual speed control using PWM fans when lower-power GPUs or add-in cards are used. Two 2000W power supplies provide up to 4000W of usable power to the GPU accelerator system.
Pre-installed with sixteen NVIDIA Titan V GPUs, the system connects to the host server through a PCIe x16 Gen3 connection.
The OSS-VOLTA4 and OSS-VOLTA8 are purpose-built for deep learning applications with fully integrated hardware and software. The OSS-VOLTA8 is an 896 TeraFLOP (Tensor performance) engine with 80GB/s NVLink for the largest deep learning models. The OSS-VOLTA4 provides 62.8 TeraFLOPS of single precision performance with 80GB/s GPU peer-to-peer NVLink. These systems are tuned for out-of-the-box operation and quick, easy deployment.
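One way to confirm GPU peer-to-peer connectivity (for example, over NVLink) on a system like this is the standard CUDA peer-access query, shown here through PyTorch; the output reflects whatever machine it runs on.

```python
# Report which GPU pairs can access each other peer-to-peer (e.g. over
# NVLink) on the local system, using standard CUDA queries via PyTorch.
import torch

def peer_matrix() -> None:
    n = torch.cuda.device_count()
    print(f"{n} CUDA devices visible")
    for a in range(n):
        peers = [b for b in range(n)
                 if a != b and torch.cuda.can_device_access_peer(a, b)]
        print(f"GPU {a} ({torch.cuda.get_device_name(a)}) -> peers {peers}")

if __name__ == "__main__":
    peer_matrix()
```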
ExpressBox 3600 provides the optimum features for supporting world-class High Performance Computing (HPC) applications that demand 'mission critical' capabilities such as N+1 redundant power and cooling and high-speed PCIe Gen 3 x16 connectivity. The EB3600 is ideal for running applications on multiple GPUs.
OSS GPU Accelerators add hundreds or thousands of GPU cores to existing servers. GPU appliances and expansion systems come in a range of densities, from one to 128 GPUs, and are purpose-built for HPC applications. Learn more about the GPUs that have been tested with OSS GPU Accelerators.
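To verify that expansion-attached GPUs are visible to the host, a short sketch using NVML (via the nvidia-ml-py package) lists every device the NVIDIA driver sees, whether it sits in the host's own slots or in an expansion chassis.

```python
# List every GPU the NVIDIA driver can see, including devices attached
# through a PCIe expansion chassis. Requires the nvidia-ml-py package.
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    print(f"driver sees {count} GPU(s)")
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        bus = pynvml.nvmlDeviceGetPciInfo(handle).busId
        print(f"  GPU {i}: {name} @ {bus}")
finally:
    pynvml.nvmlShutdown()
```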