Written by Steven Woo
Introduction
According to the National Highway Traffic Safety Administration (NHTSA), various driver assistance technologies are already helping to save lives and prevent injuries. Specific examples include helping drivers avoid unsafe lane changes, warning of vehicles behind the car when backing up, or automatically braking when a vehicle ahead stops or slows abruptly. These safety technologies leverage a wide range of hardware – including sensors, cameras and radar – to help vehicles identify safety risks and warn drivers in time to avoid collisions.
Self-Driving Memory Requirements
However, many of these systems currently use DRAM-based memory solutions with relatively modest bandwidths. This is because most advanced driver-assistance systems (ADAS) are rated Level 2 and offer drivers only partial automation capabilities. Future self-driving cars (Levels 3–5) will require new generations of memory with significantly higher bandwidths. This additional bandwidth will enable autonomous vehicles to rapidly execute massive numbers of calculations and safely implement real-time decisions on roads and highways, with the ultimate goal of doing so without any driver involvement.
Automotive systems in 2020 models are slated to be equipped with x32 LPDRAM components at I/O signaling speeds of up to 4266 Mb/s. However, Micron estimates that ADAS applications will demand 512 GB/s to 1024 GB/s of bandwidth to support Level 3 and Level 4 autonomous driving capabilities. According to Micron, GDDR6, with a data transfer rate of 16 Gb/s, is a “fundamental technology” that provides the essential memory bandwidth to fuel on-board AI compute engines (deep neural networks) and drive autonomous vehicles. At Level 5, or full autonomy, vehicles will be capable of leveraging sophisticated machine learning algorithms to independently react to a dynamic environment – fully recognizing traffic signs and stoplights, as well as accurately predicting the actions of cars, trucks and pedestrians.
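For perspective, the arithmetic behind these figures is straightforward. The short Python sketch below uses only the bus widths and data rates cited above to compare how many devices of each type would be needed to hit the Level 3/4 bandwidth targets; it computes peak theoretical bandwidth and ignores real-world interface efficiency.

```python
# Back-of-the-envelope check of the bandwidth figures above.
# Device widths and data rates come from the numbers cited in this
# section; all values are peak (theoretical) bandwidth.

def peak_gbytes_per_s(pin_rate_gbps: float, width_bits: int) -> float:
    # Per-device bandwidth: pin rate (Gb/s) * bus width (bits) / 8 bits per byte
    return pin_rate_gbps * width_bits / 8

lpddr = peak_gbytes_per_s(4.266, 32)   # x32 LPDRAM at 4266 Mb/s -> ~17 GB/s
gddr6 = peak_gbytes_per_s(16.0, 32)    # x32 GDDR6 at 16 Gb/s    -> 64 GB/s

for target in (512, 1024):
    print(f"{target} GB/s target: {target / gddr6:.0f} GDDR6 devices, "
          f"or {target / lpddr:.0f} of the LPDRAM parts")
# -> 512 GB/s:  8 GDDR6 devices vs 30 LPDRAM parts
# -> 1024 GB/s: 16 GDDR6 devices vs 60 LPDRAM parts
```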
A number of companies are already designing and marketing dedicated silicon for the nascent self-driving market. Nvidia, for example, launched its DRIVE AGX self-driving compute platforms in 2018. These platforms are built around the company’s Xavier SoC, a processor made specifically for autonomous driving. The SoC incorporates six different types of processors to run redundant and diverse algorithms for AI, sensor processing, mapping and driving. With Xavier, DRIVE AGX platforms can process data from camera, lidar, radar and ultrasonic sensors to comprehend and navigate a complex 360-degree environment in real time.
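As a loose illustration of the “redundant and diverse algorithms” pattern – and emphatically not Nvidia’s actual DRIVE API – the hypothetical sketch below runs two independent detection pipelines over the same frame and acts only on results they agree on. The detector outputs and tolerance are invented for illustration.

```python
# Hypothetical sketch of redundancy across diverse algorithms: two
# independent pipelines examine the same frame, and the system only
# trusts a result both agree on. Not Nvidia's API; all values invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    obstacle: bool
    distance_m: float

def camera_detector(frame) -> Detection:
    # stand-in for a DNN-based vision pipeline
    return Detection(obstacle=True, distance_m=24.8)

def radar_detector(frame) -> Detection:
    # stand-in for a classical radar-tracking pipeline
    return Detection(obstacle=True, distance_m=25.3)

def fused_decision(frame, tolerance_m: float = 2.0) -> Optional[Detection]:
    a, b = camera_detector(frame), radar_detector(frame)
    # Accept a result only when the diverse pipelines agree;
    # on disagreement, return None so the system can fail safe.
    if a.obstacle == b.obstacle and abs(a.distance_m - b.distance_m) <= tolerance_m:
        return Detection(a.obstacle, (a.distance_m + b.distance_m) / 2)
    return None

print(fused_decision(frame=None))
```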
Deep Neural Networks & The Limits of Automotive Autonomy
However important, hardware represents only one side of the autonomous vehicle equation. It is also critical to understand the current limitations of deep neural networks (DNNs) and deep learning models as the self-driving space evolves. For example, a recently published academic paper titled “DeepTest: Automated Testing of Deep-Neural-Network-Driven Autonomous Cars” notes that DNNs have successfully enabled the development of safety-critical machine learning (ML) systems for autonomous vehicles. However, the paper warns that DNNs are still susceptible to incorrect and unexpected corner-case behaviors that have led to collisions. As the paper’s authors point out, existing mechanisms for detecting erroneous behavior depend heavily on the manual collection of labeled test data or on ad hoc, unguided simulation. Because autonomous vehicles adapt their behavior based on sensor input, the space of possible inputs is extremely large.
As such, it is unlikely that unguided simulations alone would be robust enough to identify the full range of erroneous behaviors involved in autonomous navigation. DeepTest – the solution proposed in the paper – addresses this limitation by maximizing the neuron coverage of a DNN using synthetic test images generated by applying different realistic transformations (such as changes in brightness, blur or rain) to a set of seed images. Put succinctly, this approach uses domain-specific metamorphic relations to identify erroneous behaviors of the DNN without requiring a detailed specification of correct behavior.
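To make these two ingredients concrete, the minimal sketch below shows neuron coverage – the fraction of neurons that have fired across a test set – and a metamorphic check that a realistically transformed input should not change the model’s output much. The tiny NumPy “network,” the brightness transform and the tolerance are stand-ins for illustration, not the paper’s actual code.

```python
# Minimal sketch of DeepTest's two ingredients: (1) neuron coverage and
# (2) a metamorphic relation check. The toy two-layer network below is
# a stand-in for a real steering-angle DNN.

import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(64, 16)), rng.normal(size=(16, 1))

def forward(x):
    h = np.maximum(0, x @ W1)            # hidden ReLU activations
    return h, (h @ W2).item()            # activations, "steering angle"

def neuron_coverage(images, threshold=0.0):
    fired = np.zeros(W1.shape[1], dtype=bool)
    for x in images:
        h, _ = forward(x)
        fired |= h > threshold           # a neuron is "covered" once it fires
    return fired.mean()

def metamorphic_violation(x, transform, tol=0.1):
    _, original = forward(x)
    _, transformed = forward(transform(x))
    # A realistic transformation should leave the output nearly unchanged;
    # a large deviation flags potentially erroneous behavior.
    return abs(original - transformed) > tol

seeds = [rng.normal(size=64) for _ in range(20)]
brighten = lambda x: x + 0.05            # a simple "realistic" transform
print("neuron coverage:", neuron_coverage(seeds))
print("violations:", sum(metamorphic_violation(x, brighten) for x in seeds))
```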
In addition to the above-mentioned corner-case behaviors, another recent paper discusses deliberate physical-world attacks against deep learning models. Specifically, researchers managed to fool two road sign classifiers by applying a perturbation in the shape of black and white stickers to a stop sign. This caused targeted misclassification in 100% of images obtained in controlled lab settings and in more than 84% of video frames captured from a moving vehicle.
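The toy sketch below conveys the flavor of such an attack: the perturbation is confined to a small sticker-like mask and pushed toward a chosen wrong class. The linear classifier, mask shape and step sizes are all invented stand-ins; the researchers’ actual method targets deep convolutional road-sign classifiers and optimizes for physical robustness.

```python
# Toy illustration of a masked ("sticker") adversarial attack: confine
# the perturbation to a small mask, then push the model toward a wrong
# class. The linear classifier here is a stand-in, not the real attack.

import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_classes = 16 * 16, 3               # tiny 16x16 "sign", 3 classes
W = rng.normal(size=(n_pixels, n_classes))     # stand-in trained classifier

def logits(img):
    return img @ W

x = rng.normal(size=n_pixels)                  # the clean "sign" image
true_label = int(np.argmax(logits(x)))         # the model's clean prediction
target_label = (true_label + 1) % n_classes    # any other class will do

mask = np.zeros(n_pixels)
mask[:64] = 1.0                                # "sticker": 64 editable pixels

# For a linear model, the gradient of (target logit - true logit) with
# respect to the image is constant: the difference of two weight columns.
grad = W[:, target_label] - W[:, true_label]

delta = np.zeros(n_pixels)
for _ in range(300):
    if np.argmax(logits(x + delta)) == target_label:
        break                                  # targeted misclassification
    delta = np.clip(delta + 0.05 * grad * mask, -1.5, 1.5)

print("clean prediction:    ", true_label)
print("stickered prediction:", int(np.argmax(logits(x + delta))))
```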
Conclusion
As the NHTSA states, the dream of fully autonomous cars and trucks that drive us, instead of us driving them, will one day be a reality. Although the self-driving space is still very much a work in progress, a number of companies are designing a new generation of dedicated silicon that requires high memory bandwidth. Concurrently, deep neural networks and deep learning models continue to evolve as researchers grapple with and address various algorithmic limitations and vulnerabilities.