A wide variety of machine learning applications use GPUs, including deep learning, image recognition, autonomous cars, real-time voice translation and more. Machine learning is not new, but the growth in available data and more powerful GPUs have enabled faster, more efficient parallel computing. Processes that once took a year to complete now take mere weeks or days on GPUs, and GPU appliances supporting multiple NVIDIA GPUs will push machine learning even further.
Deep learning is a branch of machine learning that attempts to train computers to identify patterns and objects in the same way humans do. For example, Google Brain, a cluster of 16,000 computers, successfully trained itself to recognize a cat based on images taken from YouTube videos. This technology is already used in speech recognition, photo searches on Google+ and video recommendations on YouTube.
Training the neural networks used in deep learning is an ideal task for GPUs because GPUs can perform many calculations at once (parallel calculations), so training takes far less time than it once did. More GPUs mean more computational power: a system with multiple GPUs can process data much faster than a CPU-only system or a system with a CPU and a single GPU.
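The parallel calculations GPUs excel at are, at their core, large batches of multiply-accumulate operations repeated over many training steps. As a rough illustration, here is a minimal pure-Python sketch of one such training loop, fitting a hypothetical linear model by gradient descent (the model, data and function names are illustrative, not from any product; frameworks such as PyTorch or TensorFlow run this same arithmetic across thousands of GPU cores in parallel):

```python
# Minimal illustration of the arithmetic at the heart of neural-network
# training: repeated forward passes and gradient updates, which are
# dominated by multiply-accumulate operations. A GPU parallelizes these
# across thousands of cores; this pure-Python version shows only the math.

def train_linear(xs, ys, lr=0.1, epochs=500):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Forward pass: predictions for every sample.
        preds = [w * x + b for x in xs]
        # Backward pass: gradients of MSE with respect to w and b.
        grad_w = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / n
        grad_b = sum(2 * (p - y) for p, y in zip(preds, ys)) / n
        # Update step.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Recovers roughly y = 2x + 1 from four noise-free samples.
w, b = train_linear([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

A real deep network repeats this pattern across millions of parameters and many layers, which is why the parallel throughput of one or more GPUs shortens training from months to days.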
Image recognition is an application of machine learning that finds and identifies objects in images and videos. Humans can recognize objects with ease, but computer vision systems aren't quite up to the challenge yet; however, they are improving rapidly thanks to GPUs and deep learning techniques. Beyond recognizing individual objects, some software can now describe an entire scene in a picture. In December 2015, researchers from Microsoft used GPUs to achieve record results on ImageNet with a 152-layer neural network. The processing power needed for such a deep network is immense, but using GPUs instead of CPUs greatly reduces the processing time.
One of One Stop Systems' newer products, the GPUltima, is well suited for applications such as deep learning and image recognition. The GPUltima delivers up to 14.3 petaflops of compute in a single rack containing up to eight nodes; each node holds up to 16 accelerators (each with two GPUs) and one or two dual-socket servers. Customers can build up to the full rack, one node at a time, depending on their application requirements. The full rack houses 256 networked GPUs using the latest PCIe and InfiniBand technologies. This level of density gives deep learning applications incredible compute power and allows further advancements in the field.
Autonomous cars are currently one of the most talked-about future products, likely because the technology is becoming a real possibility and is relevant to the masses. Elements of it are already in cars on the road today, ranging from adaptive cruise control, blind spot detection, pre-collision braking and lane departure warning to self-parking cars. But full automation is still a work in progress, and many automakers, technology firms and research institutions are dedicating resources to driverless technology.
GPU technology is now being used to provide processing power for autonomous cars. NVIDIA's Drive PX 2 supercomputer uses deep learning to identify objects on the road picked up by the car's camera array. Rather than being programmed to follow the rules of the road, an autonomous car that can learn will be able to adapt to situations on the road that don't follow the rules.
While movies like The Terminator, in which cyborgs overtake the human race, still seem far-fetched, science fiction featuring artificial intelligence and computers that can learn doesn't seem quite as ludicrous anymore. Computers are starting to learn, albeit because they're being programmed to do so. Skype, a program with over 300 million users worldwide, is an example of how deep learning can improve the average user's life, or in this case, communication across continents.
In October 2016, Microsoft announced that its real-time language translation tool would be built into the desktop version of Skype. While the technology has existed and been used over the past year, developers have been working tirelessly to improve it. It's not perfect yet, but it's getting better. As Jacob Demmit wrote in his article for GeekWire, "So Microsoft brought in a team of linguists to train the app to understand the nuances of speech, complete with slang. They installed a profanity filter and finally decided it has reached a point where it's ready for mainstream users." The linguists' job was to teach the application to behave in a more human-like way. Real-time voice translation is just one example of what deep learning technology and techniques can achieve.
Here are some One Stop Systems products that can speed up machine learning applications with GPUs:
The GPUltima delivers up to 14.3 petaflops of compute in a rack containing up to eight nodes, each with up to 16 accelerators and one or two dual-socket servers. Customers can build up to the full rack, one node at a time, depending on their application requirements. Different node configurations provide different performance capabilities, depending on the number of accelerators employed. All NVIDIA accelerators exchange data over 100Gb InfiniBand using GPUDirect RDMA. All GPUs connect to a root complex through up to four 128Gb PCI Express connections. Root complexes are connected to the GPUs through InfiniBand, and to each other and the outside world through Ethernet.
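Using the link figures quoted above (up to four PCI Express connections at 128Gb each), the aggregate host bandwidth available to a GPU root complex works out as a simple back-of-the-envelope calculation:

```python
# Back-of-the-envelope aggregate bandwidth from the figures above:
# up to four 128Gb PCI Express connections per root complex.
PCIE_LINKS = 4
GBITS_PER_LINK = 128  # Gb/s per PCIe x16 Gen3 connection

total_gbits = PCIE_LINKS * GBITS_PER_LINK  # aggregate bandwidth in Gb/s
total_gbytes = total_gbits / 8             # same figure in GB/s

print(total_gbits, total_gbytes)  # 512 Gb/s, i.e. 64.0 GB/s
```

This is peak link bandwidth only; real-world throughput depends on protocol overhead and workload, so actual figures will be lower.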
Our compute accelerators support from one to sixteen double-wide PCIe cards and can be cabled to up to four host computers through PCIe x16 Gen3 connections, each operating at 128Gb/s. The all-steel chassis houses the power supplies, fans and a system monitor that tracks the fans, temperature sensors and power voltages. Front-panel LEDs signal minor, major or critical alarms. The compute accelerators are transparent and require no software except the drivers for the PCIe add-in cards. Compute accelerators are the best appliance for applications that require a large amount of compute power.
With up to four PCI-SIG PCIe Cable 3.0 compliant links to a host server up to 100m away, the SCA8000 supports a flexible upgrade path for new and existing datacenters, delivering the power of NVLink without upgrading server infrastructure. With advanced, independent IPMI system monitoring and a full-featured SNMP interface not available in any other GPU accelerator with NVLink, the SCA8000 fits seamlessly into any size datacenter.