Abstract
Adversarial robustness is a critical challenge in deploying deep neural
networks for real-world applications. While adversarial training is a widely
recognized defense strategy, most existing studies focus on balanced datasets,
overlooking the long-tailed distributions that prevail in real-world data and
make robustness substantially harder to achieve. This paper provides a comprehensive
analysis of adversarial training under long-tailed distributions and identifies
limitations of the current state-of-the-art method, AT-BSL, in achieving robust
performance under such conditions. To address these challenges, we propose a
novel training framework, TAET, which integrates an initial stabilization phase
followed by a stratified equalization adversarial training phase. Additionally,
prior work on long-tailed robustness has largely ignored the crucial evaluation
metric of balanced accuracy. To bridge this gap, we introduce the concept of
balanced robustness, a comprehensive metric tailored for assessing robustness
under long-tailed distributions. Extensive experiments demonstrate that our
method surpasses existing state-of-the-art defenses while also delivering
substantial gains in memory and computational efficiency. This work represents a substantial
advancement in addressing robustness challenges in real-world applications. Our
code is available at:
https://github.com/BuhuiOK/TAET-Two-Stage-Adversarial-Equalization-Training-on-Long-Tailed-Distributions.
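For illustration only, the sketch below shows one way a two-stage schedule of the kind the abstract describes could be organized: a stabilization phase on clean data followed by adversarial training with per-class reweighting as a stand-in for the stratified equalization phase. The PGD settings, the epoch split, and the inverse-frequency weights are assumptions for this sketch, not the paper's actual method; see the repository linked above for the authors' implementation.

```python
# Minimal two-stage training sketch (assumed details, not the paper's exact TAET recipe).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD; hyperparameters here are illustrative assumptions."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

def train_two_stage(model, loader, class_counts, stab_epochs=10, adv_epochs=70, device="cuda"):
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
    # Hypothetical per-class weights: rarer (tail) classes receive larger weight.
    weights = 1.0 / torch.tensor(class_counts, dtype=torch.float, device=device)
    weights = weights / weights.sum() * len(class_counts)

    for epoch in range(stab_epochs + adv_epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            if epoch < stab_epochs:
                # Phase 1: stabilization on clean examples (assumed).
                loss = F.cross_entropy(model(x), y)
            else:
                # Phase 2: adversarial training with class-balanced reweighting,
                # a simple stand-in for stratified equalization.
                x_adv = pgd_attack(model, x, y)
                loss = F.cross_entropy(model(x_adv), y, weight=weights)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```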
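One plausible reading of the balanced robustness metric mentioned in the abstract is the mean of per-class accuracies measured on adversarially perturbed inputs, so that head and tail classes contribute equally regardless of their sample counts. The paper's exact definition may differ; the helper below only encodes this assumed reading and reuses the hypothetical pgd_attack from the previous sketch.

```python
# Assumed reading of "balanced robustness": class-averaged robust accuracy.
import torch

@torch.no_grad()
def balanced_robust_accuracy(model, loader, attack, num_classes, device="cuda"):
    correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    model.eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.enable_grad():        # the attack itself needs gradients
            x_adv = attack(model, x, y)
        pred = model(x_adv).argmax(dim=1)
        for c in range(num_classes):
            mask = y == c
            total[c] += mask.sum().item()
            correct[c] += (pred[mask] == c).sum().item()
    per_class = correct / total.clamp(min=1)
    return per_class.mean().item()       # average over classes, not over samples

# Example usage (names are placeholders):
#   score = balanced_robust_accuracy(model, test_loader, pgd_attack, num_classes=10)
```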