AI Security Portal (AIセキュリティポータル) K Program
Regularization for Adversarial Robust Learning
Abstract
Despite the growing prevalence of artificial neural networks in real-world applications, their vulnerability to adversarial attacks remains a significant concern, which motivates us to investigate the robustness of machine learning models. While various heuristics aim to optimize the distributionally robust risk using the $\infty$-Wasserstein metric, this notion of robustness is frequently computationally intractable. To tackle this challenge, we develop a novel approach to adversarial training that integrates $\phi$-divergence regularization into the distributionally robust risk function. This regularization yields a notable computational improvement over the original formulation. We develop stochastic gradient methods with biased oracles to solve this problem efficiently, achieving near-optimal sample complexity. Moreover, we establish its regularization effects and demonstrate its asymptotic equivalence to a regularized empirical risk minimization framework by considering various scaling regimes of the regularization parameter and robustness level. These regimes yield gradient norm regularization, variance regularization, or a smoothed gradient norm regularization that interpolates between these extremes. We numerically validate our proposed method in supervised learning, reinforcement learning, and contextual learning, and showcase its state-of-the-art performance against various adversarial attacks.
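The interpolation the abstract describes can be illustrated with the special case $\phi = $ KL divergence, where the regularized worst-case risk reduces to a log-mean-exp smoothing of the loss over perturbations: small regularization recovers the worst case, large regularization the average case. The following is a minimal Monte Carlo sketch of that smoothed objective, not the paper's exact estimator; the function name, the uniform sampling scheme, and the $L_\infty$ ball are illustrative assumptions.

```python
import numpy as np

def smoothed_adversarial_loss(loss_fn, x, eps, lam, n_samples=64, rng=None):
    """KL-smoothed worst-case loss (a sketch, not the paper's estimator):
    lam * log E[exp(loss(x + delta) / lam)] over random perturbations delta
    drawn uniformly from the L_inf ball of radius eps."""
    rng = np.random.default_rng() if rng is None else rng
    # Sample perturbations uniformly from the L_inf ball of radius eps.
    deltas = rng.uniform(-eps, eps, size=(n_samples,) + np.shape(x))
    losses = np.array([loss_fn(x + d) for d in deltas])
    # Stable log-mean-exp: shift by the max before exponentiating.
    m = losses.max()
    return m + lam * np.log(np.mean(np.exp((losses - m) / lam)))

# As lam -> 0 this tends to the worst sampled loss (adversarial training);
# as lam -> infinity it tends to the average loss (ordinary ERM).
```

The regularization parameter `lam` thus plays the interpolating role described above: the smoothed objective is non-increasing in `lam`, sliding from the robust extreme to the nominal one.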