Machine learning methods in general and Deep Neural Networks in particular
have been shown to be vulnerable to adversarial perturbations. So far this
phenomenon has mainly been studied in the context of whole-image
classification. In this contribution, we analyse how adversarial perturbations
can affect the task of semantic segmentation. We show how existing adversarial
attacks can be transferred to this task and that it is possible to create
imperceptible adversarial perturbations that lead a deep network to misclassify
almost all pixels of a chosen class while leaving the network's predictions
nearly unchanged outside this class.
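To make the attack concrete, the following is a minimal PyTorch sketch of one way such a targeted perturbation could be computed: an iterative gradient attack (in the spirit of iterative FGSM) whose target labeling equals the network's original prediction everywhere except on pixels of the chosen class, which are relabeled to a fill class. The function name, hyperparameters, and class indices are illustrative assumptions, not the paper's exact procedure; the model is assumed to map a (1, 3, H, W) image in [0, 1] to per-pixel logits of shape (1, C, H, W).

```python
import torch
import torch.nn.functional as F

def targeted_segmentation_attack(model, image, hidden_class, fill_class,
                                 eps=4 / 255, alpha=1 / 255, steps=40):
    """Hypothetical sketch: perturb `image` so that pixels originally
    predicted as `hidden_class` flip to `fill_class`, while all other
    pixel predictions are anchored to their original values."""
    model.eval()
    with torch.no_grad():
        logits = model(image)              # (1, C, H, W) per-pixel logits
        target = logits.argmax(dim=1)      # (1, H, W) original prediction
    # Adversarial target: relabel the chosen class, keep everything else.
    target = torch.where(target == hidden_class,
                         torch.full_like(target, fill_class), target)

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        out = model((image + delta).clamp(0, 1))
        loss = F.cross_entropy(out, target)   # mean pixel-wise CE to target
        loss.backward()
        with torch.no_grad():
            # Descend on the targeted loss, then project the perturbation
            # back into the L-infinity ball of radius eps.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

Because the target agrees with the clean prediction outside the chosen class, minimizing this loss simultaneously hides that class and discourages collateral changes elsewhere, matching the behaviour described above; the small L-infinity budget keeps the perturbation imperceptible.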