Charting NVIDIA’s path to autonomous automotive leadership
Major barriers to self-driving car adoption include:
• Getting vehicles to travel safely on roads without lane markings.
• Allowing automobiles to drive on roads where markings are obscured by rain, snow or glare.
• Ensuring vehicles can still move safely when human drivers behave unpredictably around them.
These core problems go beyond what hand-coded algorithms and a few cameras can solve, and this is where NVIDIA has emerged as a key leader. It isn't possible to predict every situation a car will encounter and write custom code dictating how it should act; the contingencies are simply too numerous to cover.
However, it may be plausible to teach self-driving cars how to behave on the road, allowing them to respond with a degree of intelligence when their cameras and sensors report back data that they aren’t sure what to do with. NVIDIA has been driving this innovation, and it all began with neural networks used in supercomputing environments.
Neural networks at the foundation of self-driving advances
NVIDIA has been revolutionizing the supercomputing sector with GPU technologies for the past few years, and much of this innovation came to a head in 2015, when the organization released a landmark whitepaper highlighting just how much GPUs can impact neural network performance.
Up to that point, GPUs had been recognized for their ability to train neural networks. The training process involves providing the system with example data inputs paired with the desired outcome for each input. GPUs have been established as ideal for this work because they offer raw speed and energy-efficiency advantages that CPU-focused architectures can't match.
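The training process described above can be sketched in miniature. This toy example (illustrative only, not NVIDIA's code) trains a single-parameter model: each input is paired with a desired output, and the parameter is nudged to reduce the error, which is the same basic loop real networks run over millions of parameters:

```python
# Toy supervised training: inputs paired with desired outputs,
# parameter adjusted by gradient descent to shrink the error.

def train(examples, lr=0.1, epochs=100):
    w = 0.0  # single trainable parameter
    for _ in range(epochs):
        for x, target in examples:
            pred = w * x                     # forward pass
            grad = 2 * (pred - target) * x   # gradient of squared error
            w -= lr * grad                   # gradient-descent update
    return w

# Training data: inputs paired with desired outputs (here, y = 2x).
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(examples)
```

After training, `w` has converged close to 2.0, meaning the model has learned the input-output relationship from the examples alone.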
What hadn’t been proven, up to that point, was that GPUs can also handle inference. The inference process in neural networks involves feeding machines unexpected data and challenging them to use the pre-existing parameters set during the training phase to process those inputs. NVIDIA tests its Titan X GPU next to the Xeon E5 CPU, a 15-core processing unit. It found that the GPU operated with:
• Between 3.6- and 4.4-times higher energy efficiency levels than the CPU.
• Performance ranging from 5.3 to 6.7 times greater than the CPU's.
Similar gains held for the Tegra X1 GPU architecture, which processed 259 images per second at an efficiency of 45 images per second per watt. Intel's Core i7 6700K, by contrast, processed just 242 images per second at 3.9 images per second per watt. These experiments established GPUs as an essential technology for machine learning with neural networks, and NVIDIA has since moved to apply that technology to self-driving cars.
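The figures above imply more than they state. Dividing each chip's throughput by its efficiency yields an estimate of power draw, and the efficiency gap works out to more than tenfold (throughput and efficiency numbers are the article's; the power figures are derived, not measured):

```python
# Working through the Tegra X1 vs. Core i7 6700K figures quoted above.
chips = {
    "Tegra X1":      {"img_per_sec": 259, "img_per_sec_per_watt": 45.0},
    "Core i7 6700K": {"img_per_sec": 242, "img_per_sec_per_watt": 3.9},
}

for name, c in chips.items():
    # Power implied by the published numbers: throughput / efficiency.
    implied_watts = c["img_per_sec"] / c["img_per_sec_per_watt"]
    print(f"{name}: ~{implied_watts:.1f} W implied power draw")

# Efficiency advantage of the GPU architecture over the CPU.
efficiency_ratio = (chips["Tegra X1"]["img_per_sec_per_watt"]
                    / chips["Core i7 6700K"]["img_per_sec_per_watt"])
```

The near-equal throughput at a fraction of the implied power is what makes the GPU attractive for in-vehicle deployment, where the power budget is tight.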
Taking neural networks beyond inference
Establishing GPUs as critical components of neural network inference was a major step forward, and NVIDIA took that work to another level when it began experimenting with convolutional neural networks. These networks pass an image through successive layers of small filters, each layer extracting progressively more abstract features, until the machine can analyze the image and understand how to respond.
In a 2016 whitepaper, NVIDIA explained that it had advanced neural network training to the point of building a convolutional neural network that maps raw pixels from a single front-facing camera directly to steering commands. The network learned to process those pixels and understand what was happening in the image with minimal training, feeding steering commands to the vehicle. This holistic approach allowed the system to drive on local roads and highways, in traffic, with or without lane markings. The system could even handle parking lots and unpaved roads.
This innovation was possible because the convolutional neural network could use pixel data to learn internal representations of the necessary processing steps, allowing it to identify road features even when markings are not present. All of this is achieved with less human training effort and fewer processing steps than other methods, allowing for a smaller neural network that is more practical in real-world settings.
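The filtering operation at the heart of a convolutional network can be shown in a few lines. This toy example (with a hand-picked filter, not a learned one) slides a small vertical-edge detector over a tiny "image"; a trained network repeats this same basic operation across many layers, with learned filter values, to pick out features such as road edges:

```python
# Toy 2D convolution in pure Python. The filter values here are
# hand-picked for illustration; a real network learns them in training.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Dot product of the filter with the patch under it.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# 4x4 "image": dark left half, bright right half (a vertical edge).
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
# Vertical-edge filter: responds where brightness changes left to right.
kernel = [[-1, 1],
          [-1, 1]]
response = conv2d(image, kernel)
```

The response is strong exactly where the brightness changes and zero elsewhere, which is how stacked convolutions can localize lane edges from nothing but pixel values.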
Creating an end-to-end self-driving platform
All this work on neural networks, in terms of training, inference and convolutional image recognition, has allowed NVIDIA to build out a fully self-driving car platform that empowers automobile manufacturers to take autonomous automobiles to another level.
The DRIVE PX2 system brings together data from cameras, radar, lidar, and even ultrasonic sensors to build a three-dimensional view of the environment around the car. From there, deep neural networks process images in real time and classify objects, enabling accurate recognition. These capabilities take the form of critical functions including:
• Mapping tools that offer end-to-end visualization of roads, providing detailed HD imagery of roadways and their surroundings for automakers, map companies, and other industry stakeholders.
• A dedicated software development kit with reference applications and library modules that integrate with all data within the driving system.
• A unified artificial intelligence platform that can train neural networks in the data center in just a few days – a process that used to take months – and leave those systems ready for deployment in vehicles.
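The sensor-fusion idea above can be sketched in simplified form. Every function name, field, and threshold here is hypothetical and for illustration only, not NVIDIA's API: detections from a camera (which classifies objects) and a lidar (which measures geometry precisely) are matched by position into a single view of the scene:

```python
# Hypothetical sketch of camera/lidar fusion. Names and thresholds
# are illustrative, not drawn from any NVIDIA SDK.

def fuse(camera_objs, lidar_objs, max_dist=1.0):
    """Pair camera classifications with nearby lidar measurements."""
    fused = []
    for cam in camera_objs:
        for lid in lidar_objs:
            dx = cam["x"] - lid["x"]
            dy = cam["y"] - lid["y"]
            if (dx * dx + dy * dy) ** 0.5 <= max_dist:
                # Keep the camera's label, trust the lidar's geometry.
                fused.append({"label": cam["label"],
                              "x": lid["x"], "y": lid["y"],
                              "range_m": lid["range_m"]})
    return fused

camera = [{"label": "pedestrian", "x": 4.1, "y": 1.0}]
lidar = [{"x": 4.0, "y": 1.2, "range_m": 4.2},
         {"x": 20.0, "y": 0.0, "range_m": 20.0}]
scene = fuse(camera, lidar)
```

The point of the combination is that each sensor covers the other's weakness: the camera knows *what* an object is, while the lidar knows precisely *where* it is.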
As of Q4 2016, this work fueled the deployment of the DRIVE PX2 AutoCruise configuration, a solution for automated highway driving and HD mapping. An AutoChauffeur configuration for point-to-point travel and a fully autonomous driving platform are on their way.
NVIDIA and the IoT future
Many IoT systems emerge as businesses or industries work to solve specific problems. High accident and fatality frequencies due to human error have been pushing automakers to advance self-driving cars in recent years. NVIDIA’s efforts in GPU-based neural networking have evolved in response, turning the GPU manufacturer into one of the leaders in the self-driving car sector.