Most data manipulation attacks on deep neural networks (DNNs) during the
training stage introduce perceptible noise that can be mitigated by
preprocessing during inference or identified during the validation phase.
Therefore, data poisoning attacks during the inference stage (e.g., adversarial
attacks) are becoming more popular. However, many of these attacks do not
consider imperceptibility in their optimization algorithms, and can be detected
by correlation and structural similarity analysis, or are noticeable (e.g., by
humans) in a multi-level security system. Moreover, the majority of
inference-stage attacks rely on some knowledge of the training dataset. In this
paper, we propose a novel methodology that automatically generates
imperceptible attack images using the back-propagation algorithm on
pre-trained DNNs, without requiring any information about the training dataset
(i.e., completely training data-unaware). We present a case study on traffic
sign detection using the VGGNet trained on the German Traffic Sign Recognition
Benchmarks dataset in an autonomous driving use case. Our results demonstrate
that the generated attack images successfully cause misclassification while
remaining imperceptible in both "subjective" and "objective" quality tests.
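
As a rough illustration of this class of gradient-driven attack-image
generation (not the paper's exact algorithm), the sketch below back-propagates
the loss of a pre-trained network to the input image and bounds the resulting
perturbation so it stays visually small. The model (an ImageNet-pretrained
VGG16 stand-in for the GTSRB-trained VGGNet), step size, iteration count, and
perturbation bound are illustrative assumptions.

```python
# Minimal sketch: generate a low-perturbation attack image against a
# pre-trained DNN using input-gradient (back-propagation) steps.
# All hyperparameters below are illustrative assumptions, not the paper's.
import torch
import torch.nn.functional as F
import torchvision


def generate_attack_image(model, image, true_label, eps=2 / 255, steps=50, lr=1e-2):
    """Perturb `image` so `model` misclassifies it, keeping the perturbation
    inside an L-infinity ball of radius `eps`. The small bound is a proxy for
    imperceptibility; the paper additionally relies on correlation and
    structural-similarity style checks."""
    model.eval()
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), true_label)
        # Gradient of the loss w.r.t. the input image (back-propagation).
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + lr * grad.sign()                      # ascent step
            adv = torch.min(torch.max(adv, image - eps), image + eps)  # bound noise
            adv = adv.clamp(0.0, 1.0)                         # stay a valid image
        adv = adv.detach()
    return adv


if __name__ == "__main__":
    # Stand-in model and input; the paper's case study instead uses a VGGNet
    # trained on the GTSRB traffic-sign dataset.
    model = torchvision.models.vgg16(weights="IMAGENET1K_V1")  # downloads weights
    image = torch.rand(1, 3, 224, 224)   # placeholder traffic-sign image
    label = torch.tensor([0])
    adv = generate_attack_image(model, image, label)
    print("max perturbation:", (adv - image).abs().max().item())
```

In practice, one would pass real (preprocessed) traffic-sign images and the
network's own predicted label, and verify imperceptibility with an objective
metric such as SSIM alongside visual inspection, as the abstract describes.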