Run your most demanding scientific models on NVIDIA® Tesla® GPU Accelerators. Based on the NVIDIA Kepler™ Architecture, Tesla GPUs are designed to deliver faster, more efficient compute performance.

One Stop Systems offers a wide variety of products that can be integrated with NVIDIA GPUs. Our status as an NVIDIA Partner Network (NPN) Preferred OEM acknowledges OSS’s achievements in accelerated computing, including technical expertise, customer service, and the ability to design, implement, and maintain best-in-class accelerated computing solutions from NVIDIA. Call 877-438-2724 for more information.

WHY CHOOSE TESLA

DESIGNED FOR HPC PERFORMANCE AND RELIABILITY

The NVIDIA CUDA® parallel computing platform is enabled on GeForce®, Quadro®, and Tesla® products. Whereas GeForce and Quadro are designed for consumer graphics and professional visualization, respectively, the Tesla product family is designed from the ground up for parallel computing and programming and offers exclusive high-performance computing features.
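
As a quick illustration of that shared CUDA programming model, here is a minimal sketch (not OSS or NVIDIA sample code; the kernel, sizes, and values are hypothetical) that compiles and runs unchanged on GeForce, Quadro, and Tesla parts:

    // A hypothetical SAXPY kernel: one thread per element (error handling omitted).
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged((void **)&x, n * sizeof(float));   // unified memory keeps the sketch short
        cudaMallocManaged((void **)&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);      // enough 256-thread blocks to cover n
        cudaDeviceSynchronize();

        printf("y[0] = %.1f\n", y[0]);                       // expect 5.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }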

HIGHEST PERFORMANCE FOR HPC APPLICATIONS

Tesla products are designed with exclusive features to maximize performance for supercomputing professionals:

Full double precision floating point performance

  • 1.43 TFlops on the Tesla K40 GPU
  • Higher double precision than consumer products

Faster PCIe communication

  • The only NVIDIA product with two DMA engines for bi-directional PCIe communication
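
A minimal sketch of how an application benefits from the dual DMA engines: issuing a host-to-device copy and a device-to-host copy on separate streams lets the two transfers overlap. The buffer sizes and stream names below are illustrative only.

    // Illustrative only: overlap an upload and a download using two streams.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main()
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        printf("asyncEngineCount = %d\n", prop.asyncEngineCount);   // 2 on dual-DMA-engine boards

        const size_t bytes = 64 << 20;                              // 64 MB each way (arbitrary size)
        float *h_in, *h_out, *d_in, *d_out;
        cudaMallocHost((void **)&h_in,  bytes);                     // pinned host memory is required
        cudaMallocHost((void **)&h_out, bytes);                     // for truly asynchronous copies
        cudaMalloc((void **)&d_in,  bytes);
        cudaMalloc((void **)&d_out, bytes);

        cudaStream_t up, down;
        cudaStreamCreate(&up);
        cudaStreamCreate(&down);

        // With two DMA engines, these transfers can proceed in both directions at once.
        cudaMemcpyAsync(d_in,  h_in,  bytes, cudaMemcpyHostToDevice, up);
        cudaMemcpyAsync(h_out, d_out, bytes, cudaMemcpyDeviceToHost, down);  // contents irrelevant here
        cudaDeviceSynchronize();

        cudaStreamDestroy(up);
        cudaStreamDestroy(down);
        cudaFreeHost(h_in);
        cudaFreeHost(h_out);
        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }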

Higher performance on technical applications with large data sets

  • Larger on-board memory (12 GB on the Tesla K40 GPU)
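
The available on-board memory can be confirmed at run time; a short sketch (illustrative only) using the CUDA runtime's cudaMemGetInfo call:

    #include <cuda_runtime.h>
    #include <cstdio>

    int main()
    {
        size_t free_bytes = 0, total_bytes = 0;
        cudaMemGetInfo(&free_bytes, &total_bytes);   // user-available and total device memory
        printf("GPU memory: %.2f GB free of %.2f GB total\n",
               free_bytes / 1073741824.0, total_bytes / 1073741824.0);
        return 0;
    }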

Faster communication with InfiniBand using NVIDIA GPUDirect™

  • Special Linux patch, InfiniBand driver, and CUDA driver

Higher performance CUDA driver for Windows OS

  • TCC driver reduces CUDA kernel launch overhead and allows CUDA applications to run through Windows Remote Desktop sessions and as Windows Services
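
Whether a board is running under the TCC driver can be checked from the CUDA runtime; a small sketch (illustrative only) reading the tccDriver field of cudaDeviceProp:

    #include <cuda_runtime.h>
    #include <cstdio>

    int main()
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);           // device 0; pick the board of interest
        printf("%s: TCC driver %s\n", prop.name,
               prop.tccDriver ? "enabled" : "not enabled");
        return 0;
    }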

Learn more about exclusive software for Tesla products

Features                                          Tesla K40        Tesla K20X       Tesla K20        Tesla K10*
Number and type of GPU                            1 Kepler GK110B  1 Kepler GK110   1 Kepler GK110   2 Kepler GK104s
Peak double-precision floating-point performance  1.43 Tflops      1.31 Tflops      1.17 Tflops      0.19 Tflops
Peak single-precision floating-point performance  4.29 Tflops      3.95 Tflops      3.52 Tflops      4.58 Tflops
Memory bandwidth (ECC off)                        288 GB/sec       250 GB/sec       208 GB/sec       320 GB/sec
Memory size (GDDR5)                               12 GB            6 GB             5 GB             8 GB
CUDA cores                                        2880             2688             2496             2 x 1536

* Note: Tesla K10 specifications are shown as aggregate of two GPUs. With ECC on, 6.25% of the GPU memory is used for ECC bits. For example, 6 GB total memory yields 5.25 GB of user available memory with ECC on.
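
For reference, the peak figures above follow a simple formula: peak Tflops = CUDA cores x 2 floating-point operations per clock x clock rate. Taking the Tesla K40 as an example and assuming its 745 MHz base clock, and noting that the GK110 GPU provides one double-precision unit for every three CUDA cores:

    Peak Tflops            =  cores x 2 FLOPs/clock x clock rate
    K40 single precision:    2880 x 2 x 0.745 GHz  ≈  4.29 Tflops
    K40 double precision:     960 x 2 x 0.745 GHz  ≈  1.43 Tflops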

GPUltima

The GPUltima is a petaflop computational cluster in a rack that contains up to eight nodes, each with up to 16 accelerators and one or two dual-socket servers. Customers can build up to the full rack one node at a time, depending on their application requirements. Different node configurations provide different levels of performance, depending on the number of accelerators employed. All NVIDIA accelerators exchange data through 100 Gb/s InfiniBand transfers using NVIDIA GPUDirect™ RDMA. Within a node, all GPUs connect to the server's root complex through up to four PCI Express connections, each operating at 128 Gb/s. The root complexes connect to GPUs in other nodes through InfiniBand, and to each other and to the outside world through Ethernet.
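
To give a feel for how applications use GPUDirect RDMA across nodes, here is a minimal sketch assuming a CUDA-aware MPI library built with GPUDirect RDMA support (for example, Open MPI over InfiniBand); the ranks, element count, and tag are illustrative only:

    // Hypothetical two-rank exchange; assumes a CUDA-aware MPI with GPUDirect RDMA support.
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;
        float *d_buf;
        cudaMalloc((void **)&d_buf, n * sizeof(float));

        // A CUDA-aware MPI accepts device pointers directly; with GPUDirect RDMA the
        // InfiniBand adapter reads and writes GPU memory without staging in host RAM.
        if (rank == 0)
            MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }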

Learn More

Compute Accelerators

Our compute accelerators support from one to sixteen double-wide PCIe cards and can be cabled to up to four host computers through PCIe x16 Gen3 connections, each operating at 128 Gb/s. The all-steel chassis houses the power supplies, fans, and a system monitor that tracks the fans, temperature sensors, and power voltages. Front-panel LEDs signal minor, major, or critical alarms. The compute accelerators are transparent to the host and require no software other than the drivers for the PCIe add-in cards. Compute accelerators are ideal appliances for applications that require a large amount of compute power.
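
Because the expansion is transparent, GPUs installed in the compute accelerator enumerate on the host just like locally installed cards; a short sketch (illustrative only):

    #include <cuda_runtime.h>
    #include <cstdio>

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);   // cards in the expansion chassis appear as ordinary local devices
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, %.1f GB\n", i, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }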

Learn More