arXiv
Evaluating Adversarial Robustness: A Comparison Of FGSM, Carlini-Wagner Attacks, And The Role of Distillation as Defense Mechanism
Abstract
This technical report presents an in-depth study of adversarial attacks on
Deep Neural Networks (DNNs) used for image classification, together with
defense mechanisms aimed at improving the robustness of machine learning
models. The research focuses on the impact of two prominent attack
methodologies: the Fast Gradient Sign Method (FGSM) and the Carlini-Wagner
(CW) approach. These attacks are evaluated against three pre-trained image
classifiers, Resnext50_32x4d, DenseNet-201, and VGG-19, on the Tiny-ImageNet
dataset. Furthermore, the study evaluates defensive distillation as a defense
mechanism against FGSM and CW attacks. This defense is assessed on the
CIFAR-10 dataset, with the CNN models resnet101 and Resnext50_32x4d serving
as the teacher and student models, respectively. The proposed defensive
distillation model proves effective in thwarting attacks such as FGSM, but it
remains susceptible to more sophisticated techniques like the CW attack. The
report provides a thorough validation of the proposed scheme, with detailed
and comprehensive results elucidating the efficacy and limitations of the
defense mechanisms employed. Through rigorous experimentation and analysis,
the study offers insights into the dynamics of adversarial attacks on DNNs
and into the effectiveness of defensive strategies in mitigating their
impact.
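To make the FGSM attack mentioned above concrete: FGSM perturbs an input by a
step of size epsilon in the direction of the sign of the loss gradient with
respect to that input. The following is a minimal sketch using a toy linear
softmax classifier in NumPy (the report itself attacks pre-trained deep
networks; the weight matrix `W`, the helper names, and the epsilon value here
are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def ce_loss(W, x, y):
    """Cross-entropy loss of a linear softmax classifier on input x, label y."""
    logits = W @ x
    m = logits.max()  # stabilizer for log-sum-exp
    return m + np.log(np.exp(logits - m).sum()) - logits[y]

def input_gradient(W, x, y):
    """Gradient of ce_loss w.r.t. the input x: W^T (softmax(Wx) - onehot(y))."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[y] -= 1.0
    return W.T @ p

def fgsm_perturb(x, grad, eps):
    """FGSM step: move eps in the sign of the input gradient, clip to [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy example: random linear model, random "image" flattened to a vector.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))       # 10 classes, 64 input features
x = rng.uniform(0.0, 1.0, size=64)  # clean input in [0, 1]
y = 3                               # assumed true label
x_adv = fgsm_perturb(x, input_gradient(W, x, y), eps=0.03)
```

Because the perturbation is bounded by epsilon per coordinate, `x_adv` stays
visually close to `x` while (for this convex toy loss) the loss can only grow,
which is the mechanism the report's FGSM experiments rely on.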