Deep Neural Networks (DNNs) have tremendous potential in advancing the vision
for self-driving cars. However, the security of DNN models in this context
has major safety implications and needs to be better understood. We
consider the case study of steering angle prediction from camera images, using
the dataset from the 2014 Udacity challenge. We demonstrate for the first time
adversarial test-time attacks on this application in both classification
and regression settings. We show that minor modifications to the camera image
(an L2 distance of 0.82 for one of the considered models) result in
misclassification of an image into any class of the attacker's choice. Furthermore,
our regression attack increases the Mean Square Error (MSE) by a factor of
69 in the worst case.
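The targeted misclassification attack summarized above can be illustrated with a minimal gradient-based sketch. The toy linear classifier, step size, and iteration count below are illustrative assumptions, not the paper's actual DNN model or attack parameters; the sketch only shows the core idea of iteratively perturbing the input toward an attacker-chosen class while tracking the L2 distance of the perturbation.

```python
import numpy as np

# Hypothetical toy setup: a linear classifier over a flattened "image".
# The real attack targets a DNN; the iterative targeted-gradient idea is the same.
rng = np.random.default_rng(0)
n_pixels, n_classes = 64, 3
W = rng.normal(size=(n_classes, n_pixels))   # assumed toy model weights
x = rng.uniform(0.0, 1.0, size=n_pixels)     # clean "image" in [0, 1]

def logits(img):
    return W @ img

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

target = 2          # attacker-chosen class (assumption)
step = 0.01         # illustrative step size
x_adv = x.copy()
for _ in range(200):
    p = softmax(logits(x_adv))
    # Gradient of the targeted cross-entropy loss -log p_target w.r.t. the
    # input for a linear model: W^T (p - one_hot(target)).
    grad = W.T @ (p - np.eye(n_classes)[target])
    # Descend toward the target class, keeping pixel values valid.
    x_adv = np.clip(x_adv - step * grad, 0.0, 1.0)
    if logits(x_adv).argmax() == target:
        break

# L2 distance of the perturbation, the metric reported in the abstract.
l2 = np.linalg.norm(x_adv - x)
```

A real attack would backpropagate through the trained steering model instead of the toy linear map, and the regression variant would ascend the squared error between the predicted and true steering angle rather than descend a classification loss.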