Deep learning is a branch of machine learning that attempts to train computers to identify patterns and objects much as humans do. For example, Google Brain, a cluster of 16,000 computer processors, successfully trained itself to recognize a cat from images taken from YouTube videos. The technology is already used in speech recognition, photo search on Google+, and video recommendations on YouTube.
Training the neural networks used in deep learning is an ideal task for GPUs because GPUs perform many calculations at once (in parallel), so training takes far less time than it once did. More GPUs mean more computational power: a system with multiple GPUs can process data much faster than a CPU-only system or one with a CPU and a single GPU. One Stop Systems' newest product, the GPUltima, is well suited for applications such as deep learning and image recognition. The GPUltima is a petaflop computational cluster in a rack containing up to eight nodes; each node holds one or two dual-socket servers and up to 16 accelerators, each with two GPUs. Customers can build up to the full rack one node at a time, depending on their application requirements. The full rack houses 256 networked GPUs using the latest PCIe and InfiniBand technologies. This level of density gives deep learning applications tremendous compute power and enables further advances in the field.
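Why does training parallelize so well? The gradient over a batch of training examples is just the average of independent per-example gradients, so the work can be split across many processors and recombined. The toy model below (a hypothetical one-parameter linear model, not One Stop Systems or NVIDIA code) sketches that idea:

```python
# Toy illustration of data-parallel training: the batch gradient is the
# average of independent per-example gradients, so shards of the batch
# can be processed on separate devices and the results averaged.

def example_grad(w, x, t):
    """Gradient of the squared error (w*x - t)**2 with respect to w."""
    return 2.0 * (w * x - t) * x

def batch_grad(w, batch):
    """Full-batch gradient: average of per-example gradients."""
    return sum(example_grad(w, x, t) for x, t in batch) / len(batch)

def sharded_grad(w, batch, num_shards):
    """Split the batch into shards (one per device), compute each shard's
    gradient independently -- in parallel on real hardware -- then take a
    size-weighted average so unequal shards are handled correctly."""
    shards = [batch[i::num_shards] for i in range(num_shards)]
    shards = [s for s in shards if s]
    shard_grads = [batch_grad(w, s) for s in shards]  # parallelizable step
    sizes = [len(s) for s in shards]
    return sum(g * n for g, n in zip(shard_grads, sizes)) / sum(sizes)

batch = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (0.5, 0.0)]
assert abs(batch_grad(0.3, batch) - sharded_grad(0.3, batch, 4)) < 1e-12
```

Because the shards are independent, doubling the number of devices roughly halves the wall-clock time for this step, which is why multi-GPU systems train so much faster.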
Image recognition is an application of machine learning that finds and identifies objects in images and videos. Humans can recognize objects with ease, but computer vision systems aren't quite up to the challenge yet. However, they are improving rapidly thanks to GPUs and deep learning techniques. Beyond recognizing individual objects, there is now software that can describe an entire scene in a picture. In December 2015, researchers from Microsoft used GPUs to achieve record results on ImageNet with a 152-layer neural network. The processing power needed for such a deep network is immense, but using GPUs instead of CPUs greatly reduces the processing time.
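Microsoft's 152-layer network was made trainable by residual (shortcut) connections: each block learns a correction f(x) and adds it back to its input, so a layer that isn't needed can simply learn f(x) ≈ 0 and pass the signal through unchanged. The scalar toy below sketches that idea; it is an illustration, not the actual model:

```python
# Minimal sketch of a residual connection: y = f(x) + x. Stacking many
# such blocks does not degrade the signal, because each block can fall
# back to the identity by learning f(x) = 0.

def residual_block(x, f):
    """Output of one residual block with learned function f."""
    return f(x) + x

def deep_stack(x, blocks):
    """Feed x through a sequence of residual blocks."""
    for f in blocks:
        x = residual_block(x, f)
    return x

# A stack of 152 blocks that have all learned "do nothing" (f(x) = 0)
# leaves the input intact instead of corrupting it layer by layer.
identity_blocks = [lambda v: 0.0] * 152
assert deep_stack(5.0, identity_blocks) == 5.0
```

This is what lets networks grow from a few dozen layers to 152 and beyond without the training signal vanishing on the way down.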
Autonomous cars are currently one of the most talked-about future products, likely because the technology is becoming a real possibility and is relevant to the masses. Elements of it are already in cars on the road today, ranging from adaptive cruise control, blind-spot detection, pre-collision braking, and lane-departure warning to self-parking. But full automation is still a work in progress, one to which many automakers, technology firms, and research institutions are dedicating resources as they experiment with driverless technology.
GPU technology is now being used to provide processing power for autonomous cars. NVIDIA's Drive PX 2 supercomputer will use deep learning to identify objects on the road picked up by the car's camera array. Rather than being programmed to follow the rules of the road, an autonomous car that can learn will be able to adapt to situations that don't follow the rules.
While movies like The Terminator, in which cyborgs overtake the human race, still seem far-fetched, science fiction featuring artificial intelligence and computers that can learn doesn't seem quite as ludicrous anymore. Computers are starting to learn, albeit because they're being programmed to do so. Skype, a program with over 300 million users worldwide, is an example of how deep learning can improve life for the average user, in this case by improving communication across continents.
In early October, Microsoft announced that its real-time language translation tool would be built into the desktop version of Skype. The technology has existed and been in use for the past year, and developers have been working tirelessly to improve it; it's not perfect yet, but it's getting better. As Jacob Demmit wrote in his article for GeekWire, "So Microsoft brought in a team of linguists to train the app to understand the nuances of speech, complete with slang. They installed a profanity filter and finally decided it has reached a point where it's ready for mainstream users." The linguists' role was to train the app to handle speech in a more human-like way. Real-time voice translation is just one example of what can be achieved with deep learning technology and techniques.
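Conceptually, a pipeline like the one described above chains recognition, filtering, and translation stages. The toy sketch below illustrates that flow; the word list and dictionary are hypothetical stand-ins, not Skype Translator's actual components:

```python
# Toy speech-translation pipeline: recognized text passes through a
# profanity filter and then a word-by-word translation step. Real
# systems use deep neural networks at each stage; this only shows the
# staged structure.

PROFANITY = {"darn"}                              # hypothetical filter list
EN_TO_ES = {"hello": "hola", "friend": "amigo"}   # hypothetical lexicon

def profanity_filter(words):
    """Mask any word on the filter list."""
    return ["***" if w in PROFANITY else w for w in words]

def translate(words):
    """Translate word by word, passing through out-of-vocabulary words."""
    return [EN_TO_ES.get(w, w) for w in words]

def pipeline(utterance):
    words = utterance.lower().split()   # stand-in for speech recognition
    return " ".join(translate(profanity_filter(words)))

assert pipeline("Hello friend") == "hola amigo"
assert pipeline("darn hello") == "*** hola"
```

In the real product each stage is a learned model rather than a lookup table, which is exactly where the linguists' training work and the deep learning come in.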
Here are some machine learning applications that can be sped up by using GPUs:
- Trakomatic OSense, OTrack