The GAS-Rugged (GAS-R) offers unprecedented compute density, performance, and flexibility in the first 10 PetaOPS† AI system for rugged edge computing environments. GAS-R features the world’s most advanced accelerator, the NVIDIA® A100 Tensor Core GPU, enabling AI on the Fly® customers to consolidate training, inference, and analytics into a unified, deployable AI solution at the edge. With available rack and flange mounting options, an adaptable power subsystem, and unheard-of performance in a modest 23”-deep aluminum enclosure, the GAS-R excels in demanding autonomous driving vehicle and airborne applications with true “Datacenter in the Sky” capability.
The 4U Pro provides optimized PCIe Gen 4 configurable expansion for edge HPC/AI applications at twice the performance of the previous-generation PCIe Gen 3. The appliance supports up to 8 NVIDIA A100 PCIe GPUs, which deliver 2.5x the FP64 performance of the NVIDIA V100, with four PCIe Gen 4 x16 HBA/NIC slots for up to 256GB/s of sustained data throughput. Alternatively, the 4U Pro can be configured to provide 16 single-width PCIe Gen 4 x8 slots for FPGA data ingest or the latest storage add-in cards.
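The 256GB/s figure follows from PCIe Gen 4 lane arithmetic. A back-of-envelope sketch, assuming the quoted number counts both transfer directions across all four x16 slots (the encoding and rounding conventions below are assumptions, not taken from this text):

```python
# Rough PCIe Gen 4 throughput arithmetic for four x16 slots.
GT_PER_S = 16            # Gen 4 raw signaling rate per lane (GT/s)
ENCODING = 128 / 130     # 128b/130b line-encoding efficiency
LANES = 16               # lanes per x16 slot
SLOTS = 4                # HBA/NIC slots in the 4U Pro

gb_per_s_per_dir = GT_PER_S * ENCODING * LANES / 8  # ~31.5 GB/s per slot, one direction
bidir_per_slot = 2 * gb_per_s_per_dir               # ~63 GB/s per slot, both directions
total = SLOTS * bidir_per_slot                      # ~252 GB/s, marketed as 256GB/s
print(round(total), "GB/s")
```

The marketing figure rounds each Gen 4 lane to 2GB/s per direction (4 slots × 16 lanes × 2GB/s × 2 directions = 256GB/s); the encoded rate works out slightly lower.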
The 4UV supports anywhere from 4 to 8 double-width cards or 5 to 10 single-width cards, with several configurations available in both PCIe Gen 3 and Gen 4. PCIe cables connect the accelerator to a host server. Two fan choices allow for high-power GPU cooling up to 300W per GPU, or set-and-forget manual speed control using PWM fans when lower-power GPUs or add-in cards are used. Two 2000W power supplies provide up to 4000W of usable power to the GPU accelerator system.
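The 4000W combined supply comfortably covers a full load of 300W cards. A hypothetical worst-case budget sketch (the non-GPU overhead figure is an assumption for illustration, not a published spec):

```python
# Hypothetical worst-case power budget for a fully loaded 4UV.
GPU_COUNT = 8        # maximum double-width cards
GPU_TDP_W = 300      # maximum per-GPU cooling/power budget stated for the system
OVERHEAD_W = 400     # ASSUMED fans, PCIe switches, and misc. board power

SUPPLY_W = 2 * 2000  # two 2000W supplies, combined (non-redundant)

load_w = GPU_COUNT * GPU_TDP_W + OVERHEAD_W   # 2800W estimated draw
headroom_w = SUPPLY_W - load_w                # 1200W remaining
print(load_w, headroom_w)
```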
The EB4400 provides PCIe Gen 4 configurable expansion for ruggedized, transportable AI applications at twice the performance of the previous-generation PCIe Gen 3. The appliance supports up to 4 NVIDIA A100 PCIe GPUs, which deliver 2.5x the FP64 performance of the NVIDIA V100, with two PCIe Gen 4 x16 HBA/NIC slots for up to 128GB/s of sustained data throughput.
With up to four PCI-SIG PCIe Cable 3.0 compliant links to a host server up to 100m away, the SCA8000 supports a flexible upgrade path for new and existing datacenters, delivering the power of NVLink without upgrading server infrastructure. With advanced, independent IPMI system monitoring and a full-featured SNMP interface not available in any other GPU accelerator with NVLink, the SCA8000 fits seamlessly into any size datacenter.
The GPUltima-CI is a power-optimized rack that can be configured with up to 32 dual-socket Intel Xeon Scalable Architecture compute nodes, 64 network adapters, 48 NVIDIA® Volta™ GPUs, and 32 NVMe drives on a 128Gb PCIe switched fabric, and can support tens of thousands of composable server configurations per rack. Using one or many racks, the OSS solution contains the necessary resources to compose any combination of GPU, NIC, and storage resources as may be required in today’s mixed-workload data center.
The GPUltima is 14.3 petaflops of computational clusters in a rack that contains up to eight nodes, each with up to 16 accelerators and one or two dual-socket servers. Customers can build up to the full rack, one node at a time, depending on their application requirements. Nodes provide different performance capabilities, depending on the number of accelerators employed. All NVIDIA accelerators exchange data through InfiniBand 100Gb transfers using GPUDirect RDMA. All GPUs are connected to a root complex through up to four 128Gb PCI Express connections. Root complexes are connected to each other through InfiniBand, and to the outside world through Ethernet.
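The 14.3-petaflops figure is consistent with a full rack of Volta accelerators. A back-of-envelope sketch, assuming the accelerators are NVIDIA Tesla V100 PCIe cards at 112 TFLOPS peak Tensor performance (an assumption inferred from the quoted total, not stated in this text):

```python
# Rack-level peak throughput arithmetic for a fully populated GPUltima.
NODES = 8              # maximum nodes per rack
GPUS_PER_NODE = 16     # maximum accelerators per node
TFLOPS_PER_GPU = 112   # ASSUMED: Tesla V100 PCIe peak Tensor performance

total_tflops = NODES * GPUS_PER_NODE * TFLOPS_PER_GPU
print(total_tflops / 1000, "PFLOPS")  # 14.336 PFLOPS, quoted as 14.3 petaflops
```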
Pre-installed with sixteen NVIDIA TITAN V GPUs, this system connects to the host server through a PCIe Gen 3 x16 connection.
ExpressBox 3600 provides the optimum features for supporting world-class High Performance Computing (HPC) applications that demand mission-critical features such as N+1 redundant power and cooling and high-speed PCIe Gen 3 x16 connectivity. The EB3600 is ideal for running applications on multiple GPUs.
The OSS-VOLTA4 and OSS-VOLTA8 are purpose-built for deep learning applications with fully integrated hardware and software. The OSS-VOLTA8 is an 896 TeraFLOPS (Tensor performance) engine with 80GB/s NVLink for the largest deep learning models. The OSS-VOLTA4 provides 62.8 TeraFLOPS of single-precision performance with 80GB/s GPU peer-to-peer NVLink. These systems are tuned for out-of-the-box operation and quick and easy deployment.
OSS GPU Accelerators add hundreds or thousands of cores to existing servers. Purpose-built for HPC applications, these GPU appliances and expansion systems come in various densities, from one to 128 GPUs. Learn more about the GPUs that have been tested with OSS GPU Accelerators.