Can manual labelling protect us from hacks on driverless cars?

Chawin Sitawarin and a team of researchers at Princeton University recently published a study that sent ripples around the world of AI. The paper dealt with potential hacks on driverless cars as its central theme. It posed the question: what would happen if a malicious agent were to cause a car to misinterpret a road sign and behave inappropriately? At first glance that may not sound too serious, but as soon as you imagine a car speeding down the highway being led to read a 120 km/h speed limit as 30 km/h, it quickly becomes clear how dire the problem could become.

Sitawarin et al. examined these attacks, which they call DARTS (Deceiving Autonomous Cars with Toxic Signs), through a series of virtual and real-life tests. In both cases they developed toxic signs, installed them in the environment, captured videos using a vehicle-mounted camera and processed them with a sign recognition pipeline. Disturbingly, they found they could deceive the recognition pipeline more than 90 percent of the time in both the digital and the real-world settings.
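To get a feel for how such a toxic sign can be produced, consider the following rough sketch of a generic gradient-based perturbation (the so-called fast gradient sign method). It is not the actual DARTS pipeline, and the classifier it assumes is a hypothetical stand-in, but it illustrates how a nearly invisible change to an image can flip a model's prediction.

```python
# Minimal sketch of a gradient-based adversarial perturbation (FGSM).
# This is NOT the DARTS pipeline; the model and labels are hypothetical
# stand-ins for a traffic-sign classifier.
import torch.nn.functional as F

def make_toxic_sign(model, sign_image, true_label, epsilon=0.03):
    """Nudge every pixel slightly in the direction that increases the loss.

    sign_image: tensor of shape (1, 3, H, W), values in [0, 1]
    true_label: tensor of shape (1,) holding the correct class index
    """
    image = sign_image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # A small, nearly invisible step per pixel can be enough to flip the
    # classifier's prediction.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```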

Such a high number naturally raises the question: how can cars be so vulnerable, and are there ways to make these systems more robust?

from: “DARTS: Deceiving Autonomous Cars with Toxic Signs”

Developing cyber security guards

Conceptualizing how to fix the problem requires a baseline understanding of the technology behind autonomous vehicles. Generally these cars are equipped with various sensors, such as cameras and LiDAR (Light Detection and Ranging), to monitor road conditions and avoid collisions.

Archer Software, an international outsourcing company active in the field of cybersecurity, has proposed a few software-based solutions. For one, they propose a redundancy system: having multiple LiDAR sensors active at any one time. If the sensors use different, non-overlapping wavelengths, this may reduce an attacker’s chance of success, since it is harder and more expensive to attack multiple signals at the same time.
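As a rough illustration of the redundancy idea, one might cross-check the distance readings coming from two sensors operating at different wavelengths and flag directions where they disagree. The sensor names and tolerance below are purely illustrative assumptions, not Archer Software’s design.

```python
# Hypothetical sketch of a redundancy check between two LiDAR sensors operating
# at different wavelengths. Names and the tolerance are illustrative only.
def cross_check(distances_905nm, distances_1550nm, tolerance_m=0.5):
    """Return the indices of directions where the two channels disagree."""
    suspicious = []
    for i, (d1, d2) in enumerate(zip(distances_905nm, distances_1550nm)):
        if abs(d1 - d2) > tolerance_m:
            suspicious.append(i)  # possible spoofing on one of the channels
    return suspicious

# Example: the second reading disagrees by two meters, so it gets flagged.
print(cross_check([12.1, 30.0, 7.4], [12.0, 32.0, 7.5]))  # -> [1]
```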

Other proposed methods focus on the way signals are received. For instance, random probing of signals is thought to make it more difficult for a hacker to synchronize with them. Varying the probing period randomly should make the timing of the next pulse harder to predict. Similarly, shortening the LiDAR’s range to 100 meters, for instance, would decrease the pulse period, giving a potential attacker a narrower window within which to strike.
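A minimal sketch of the randomized-probing idea, with purely illustrative timing values rather than real device parameters, might look like this:

```python
# Sketch of randomized probing: the gap between LiDAR pulses is jittered so an
# attacker cannot easily synchronize spoofed returns with the real pulses.
import random
import time

def emit_pulse():
    pass  # stand-in; a real system would trigger the laser here

def probe_loop(base_period_s=0.001, jitter_s=0.0005, n_pulses=10):
    for _ in range(n_pulses):
        emit_pulse()
        # Unpredictable waiting time before the next pulse.
        time.sleep(base_period_s + random.uniform(0, jitter_s))
```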

Though many of these approaches hold potential, they all intervene at the moment signals are sent or received. For added security, developers also need to think at the level of interpretation.

 

from: Robust Physical-World Attacks on Deep Learning Models

Getting the original signal right

It often surprises people how much manual labour is required to get AI off the ground. In order for computers to recognize particular objects, the help of human eyes (in the form of “the crowd”) is required up front. From that perspective, AI is only as good as the training data it has been provided, which in turn requires human beings. For this reason, the cost of creating and annotating maps for cities, for instance, can amount to billions of dollars in the US alone.

Though labour-intensive, the annotation process is quite simple. A human being is presented with a data set, a short video or image, and is tasked with drawing and labelling boxes around the respective road elements: cars, road signs, pedestrians and so on. It takes an army of people to develop a comprehensive data set, which is why a single hour of driving can take up to 800 human hours to label.
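In code, a single annotation of this kind might be represented by a record like the one below. The field names are assumptions for the sake of illustration; real pipelines typically use established formats such as COCO or their own in-house schemas.

```python
# Illustrative sketch of a single bounding-box annotation.
from dataclasses import dataclass

@dataclass
class BoxAnnotation:
    frame_id: int       # which video frame the box belongs to
    label: str          # e.g. "car", "road sign", "pedestrian"
    x: float            # top-left corner of the box, in pixels
    y: float
    width: float
    height: float
    annotator_id: str   # the human (or model) that drew the box

example = BoxAnnotation(frame_id=412, label="road sign",
                        x=1032.0, y=268.0, width=64.0, height=64.0,
                        annotator_id="crowd_worker_17")
```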

Although that number can seem high, it must be considered in relation to the output. Every hour is well invested if it provides a foundation that deep learning can build on. In this regard, companies like Drive have begun to use deep-learning-enhanced automation for annotating data. The system relies on a small group of human annotators who annotate brand-new scenarios or validate the annotations the system has produced on its own. “There are some scenarios where our deep-learning system is working very well,” Sameep Tandon, CEO of Drive, told Spectrum magazine. “So we have a team of human annotators do the first iteration, and we iteratively improve the deep-learning system.”
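A hedged sketch of such a human-in-the-loop flow might route only the system’s low-confidence annotations to people for review. The threshold and function names below are illustrative assumptions, not a description of Drive’s actual system.

```python
# Sketch of a human-in-the-loop flow: the model pre-annotates a frame and only
# its low-confidence boxes are queued for human review.
def annotate_frame(frame, model, review_queue, confidence_threshold=0.9):
    boxes = model.predict(frame)  # machine-generated annotations
    needs_review = [b for b in boxes if b.confidence < confidence_threshold]
    if needs_review:
        review_queue.append((frame, needs_review))  # humans validate or correct
    return boxes
```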

from: DARTS: Deceiving Autonomous Cars with Toxic Signs

Conclusion

As problems go, this one is particularly tricky. The stakes are high: a single hack has the potential to cost many lives on our roads. At the same time, the problem is complex, and it will require a solution that integrates software and hardware elements. At the foundation of all of this will be a focus on improving the training of our AI systems. Sitawarin and the Princeton scholars mentioned above suggest countermeasures based on “adversarial training”, which in the short term will require intensive input from humans before deep learning can take matters further.
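In rough terms, adversarial training means generating perturbed examples during training and teaching the model to classify them correctly. The following sketch, with purely illustrative hyperparameters, shows what a single such training step could look like; it is a generic FGSM-style variant, not the specific scheme proposed in the paper.

```python
# Minimal sketch of one adversarial-training step: perturb the batch with an
# FGSM-style attack, then train the model on the perturbed images.
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # Generate adversarial versions of the batch on the fly.
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

    # Train on the adversarial examples so the classifier learns to resist them.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(adv_images), labels)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```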