Abstract
Amid growing concern over cybersecurity threats, defending against backdoor
attacks is paramount to ensuring the integrity and reliability of machine
learning models. However, many existing approaches require substantial amounts
of data for effective mitigation, which poses significant challenges in
practical deployment. To address this, we propose a novel approach that
counters backdoor attacks by treating their mitigation as an unlearning task.
We tackle this challenge through a targeted model-pruning strategy that
leverages unlearning-loss gradients to identify and eliminate backdoor
elements within the model. Built on solid theoretical insights, our approach
is simple and effective, making it well suited to scenarios with limited data
availability. Our methodology comprises formulating a suitable unlearning loss
and devising a model-pruning technique tailored to convolutional neural
networks. Comprehensive evaluations demonstrate the efficacy of the proposed
approach against state-of-the-art defenses, particularly in realistic data
settings.
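The abstract describes pruning guided by unlearning-loss gradients: filters whose weights receive large gradients under an unlearning objective are flagged as backdoor-related and removed. The paper's exact loss and criterion are not given here, so the following is a minimal NumPy sketch under an assumed criterion (per-filter L2 norm of the gradient); the function name, the saliency measure, and the pruning ratio are illustrative, not the authors' implementation.

```python
import numpy as np

def prune_by_unlearning_gradient(filters, grads, ratio=0.25):
    """Zero out the convolutional filters whose unlearning-loss gradient
    magnitude is largest (assumed criterion: a large gradient under the
    unlearning loss marks a filter as backdoor-related).

    filters, grads: arrays of shape (num_filters, in_ch, kh, kw).
    Returns the pruned filter bank and the indices that were zeroed.
    """
    # Per-filter saliency: L2 norm of the gradient over each filter.
    scores = np.linalg.norm(grads.reshape(grads.shape[0], -1), axis=1)
    k = max(1, int(ratio * len(scores)))
    prune_idx = np.argsort(scores)[-k:]  # top-k filters by saliency
    pruned = filters.copy()
    pruned[prune_idx] = 0.0              # remove suspected backdoor filters
    return pruned, prune_idx
```

In a data-limited setting, `grads` would be accumulated from only a handful of (possibly poisoned) samples, which is consistent with the abstract's claim that the method suits scenarios with little available data.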
External Datasets
CIFAR-10
German Traffic Sign Recognition Benchmark (GTSRB)
References
Mauro Barni, Kassem Kallas, Benedetta Tondi. "A new backdoor attack in CNNs by training set corruption without label poisoning." Proceedings of the IEEE International Conference on Image Processing (ICIP), 2019.
"Effective backdoor defense by exploiting sensitivity of poisoned samples." Advances in Neural Information Processing Systems.