Geospatial Intelligence (GEOINT) applications used by the military to create real-time maps of the battlefield require high compute acceleration to deliver the necessary data quickly. Today these calculations are performed by specialized software running on GPU cards, coprocessors, or FPGA cards. The military gathers vast amounts of information from a variety of sources that must be processed to generate the 2D and 3D mapping required by field operations. GPU cards, with thousands of cores each, offload the number crunching and image processing from the CPUs.

GPUs are typically added directly to a server, but the amount of data that can be processed depends on the number of GPUs the computer can support. Current servers typically provide seven PCIe slots, but only one or two generally have enough bandwidth to fully support the latest GPUs. In most cases, the more GPUs available to process data, the faster the results reach the analyst. The most advanced computers hold multiple GPUs for this purpose, but GPUs demand substantial power and cooling, and most computers are not equipped to accommodate more than one or two GPU cards.

Multiple GPUs can be added to any computer by expanding the PCI Express (PCIe) bus from the computer to a separate enclosure that houses multiple boards. These enclosures are connected to one or more servers through PCIe Gen3 x16 cables with a theoretical bandwidth of 128 Gb/s. Connecting to the computer's PCIe bus natively over PCIe eliminates any protocol conversion back to the root complex, dramatically reducing latency and cost. The One Stop Systems (OSS) High Density Compute Accelerator (HDCA) accommodates up to sixteen GPUs in a 3U chassis, provides full bandwidth to every board, and includes ample redundant power and cooling. These enclosures support up to 16 NVIDIA GPU cards or Intel Xeon Phi coprocessors.
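The 128 Gb/s figure cited above follows directly from the PCIe Gen3 link parameters: 8 GT/s per lane across 16 lanes, with 128b/130b line encoding reducing the usable payload rate slightly. A small sketch of that arithmetic (the constants are standard PCIe Gen3 values, not taken from the text):

```python
# Theoretical bandwidth of a PCIe Gen3 x16 link.
# PCIe Gen3 signals at 8 GT/s per lane and uses 128b/130b encoding.
GT_PER_S = 8           # gigatransfers per second, per lane
LANES = 16
ENCODING = 128 / 130   # fraction of raw bits carrying payload

raw_gbps = GT_PER_S * LANES            # 128 Gb/s raw, the figure cited above
effective_gbps = raw_gbps * ENCODING   # ~126 Gb/s after encoding overhead
effective_gBps = effective_gbps / 8    # ~15.75 GB/s usable per direction

print(raw_gbps, round(effective_gbps, 1), round(effective_gBps, 2))
```

The raw rate matches the 128 Gb/s theoretical bandwidth in the text; the encoding-adjusted figure is what the link can actually carry per direction before protocol overhead.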

The 3U HDCA is a modular system that is easy to install, with only three basic parts: a rack-mountable chassis, four canisters, and three power supplies. Once the chassis shell is installed in the rack, the canisters and power supplies slide into place from the front. Four PCIe connections at the rear of the chassis support up to four host servers: one server can operate all 16 GPUs, two servers can operate 8 GPUs each, and four servers can operate 4 GPUs each. The system automatically detects the number of servers attached and maps the GPUs to the appropriate server connections.
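The allocation rule described above — 16 GPUs divided evenly among one, two, or four attached hosts — can be sketched as follows. This is a hypothetical illustration of the arithmetic only; the real mapping is performed by the chassis hardware, and the function name is invented for this example:

```python
# Hypothetical sketch of the HDCA host-to-GPU allocation described above:
# the chassis splits its 16 GPUs evenly across the attached host connections.
TOTAL_GPUS = 16

def gpus_per_host(attached_hosts: int) -> int:
    """Return how many GPUs each attached host operates."""
    if attached_hosts not in (1, 2, 4):
        raise ValueError("the chassis supports 1, 2, or 4 host connections")
    return TOTAL_GPUS // attached_hosts

for hosts in (1, 2, 4):
    print(f"{hosts} host(s): {gpus_per_host(hosts)} GPUs each")
```

Running the loop reproduces the three configurations in the text: 1 host with 16 GPUs, 2 hosts with 8 each, and 4 hosts with 4 each.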

GPUs are used in numerous defense and intelligence operations today, and their number is growing rapidly. The need to transform huge amounts of raw data into useful intelligence through data and image processing is becoming overwhelming. The more GPUs available, the sooner the data can be used. GPU appliances supporting multiple NVIDIA GPUs and Intel Xeon Phi coprocessors are quickly becoming the most effective and economical way of meeting this demand.