Deep neural networks (DNNs) have been shown to be vulnerable to adversarial
examples, which are carefully crafted inputs with small perturbations designed
to induce arbitrarily incorrect predictions. Recent studies
show that adversarial examples can pose a threat to real-world
security-critical applications: a "physical adversarial Stop Sign" can be
synthesized such that autonomous vehicles misrecognize it as another sign
(e.g., a speed limit sign). However, these image-space adversarial examples
cannot easily alter the 3D scans produced by the LiDAR or radar sensors widely
equipped on autonomous vehicles. In this paper, we reveal the potential vulnerabilities of
LiDAR-based autonomous driving detection systems by proposing an
optimization-based approach, LiDAR-Adv, to generate adversarial objects that
can evade the LiDAR-based detection system under various conditions. We first
demonstrate the vulnerabilities using a black-box evolution-based algorithm,
and then explore how much a stronger adversary can achieve using our
gradient-based approach, LiDAR-Adv. We test the generated adversarial objects on the Baidu Apollo
autonomous driving platform and show that such physical systems are indeed
vulnerable to the proposed attacks. We also 3D-print our adversarial objects
and perform physical experiments to demonstrate that such vulnerabilities exist
in the real world. Additional visualizations and results are available on the anonymous
website: https://sites.google.com/view/lidar-adv.