Abstract
Adversarial example attacks have emerged as a critical threat to machine
learning. In image classification, adversarial attacks apply various small
modifications to an image that confuse the classification neural network
-- while the image still remains recognizable to humans. One important domain
where these attacks have been applied is the automotive setting, with traffic
sign classification. Researchers have demonstrated that adding stickers,
shining light, or casting shadows are all different means to make machine
learning inference algorithms mis-classify traffic signs. This can create
potentially dangerous situations: for example, a stop sign recognized as a
speed limit sign may cause vehicles to ignore it, potentially leading to
accidents. To
address these attacks, this work focuses on strengthening defenses against
such adversarial attacks. It shifts the advantage to the user by introducing
the idea of leveraging historical images combined with majority voting. While
the attacker modifies the traffic sign that is currently being processed by
the victim's machine learning inference, the victim can gain an advantage by
examining past images of the same traffic sign. This work introduces the
notion of "time traveling" and uses historical Street View images, accessible
to anybody, to perform inference on different, past versions of the same
traffic sign. In the evaluation, the proposed defense is 100% effective
against the latest adversarial example attacks on traffic sign classification
algorithms.
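The core idea of combining a current prediction with predictions on historical images via majority voting can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline; the function name and inputs (string class labels) are hypothetical assumptions.

```python
from collections import Counter

def vote_with_history(current_label, historical_labels):
    """Majority vote over the label predicted for the current (possibly
    adversarially modified) sign and labels predicted for historical
    images of the same sign. Hypothetical helper: the abstract does not
    specify the real interface or tie-breaking rule."""
    votes = Counter([current_label] + historical_labels)
    winner, _count = votes.most_common(1)[0]
    return winner

# An attacker fools inference on the current image ("speed_limit"),
# but unmodified historical images of the same sign outvote it.
final = vote_with_history("speed_limit", ["stop", "stop", "stop"])
print(final)  # -> stop
```

Because the attacker can only perturb the sign as it appears now, the unmodified historical views dominate the vote, which is how the defense shifts the advantage back to the victim.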