Abstract
We propose a general approach to evaluating the performance of robust
estimators based on adversarial losses under misspecified models. We first show
that adversarial risk is equivalent to the risk induced by a distributional
adversarial attack under certain smoothness conditions. This ensures that the
adversarial training procedure is well-defined. To evaluate the generalization
performance of the adversarial estimator, we study the adversarial excess risk.
Our analysis covers both the generalization error and the approximation error. We then establish non-asymptotic upper bounds
for the adversarial excess risk associated with Lipschitz loss functions. In
addition, we apply our general results to adversarial training for
classification and regression problems. For the quadratic loss in nonparametric
regression, we show that the adversarial excess risk bound can be improved over
the bound obtained for a general loss.
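
For context, a minimal sketch of the adversarial risk and the adversarial training objective that the abstract refers to is given below; the symbols $f$ (estimator), $\ell$ (loss), $\epsilon$ (perturbation radius), $\mathcal{F}$ (hypothesis class), and the norm $\|\cdot\|$ are illustrative assumptions and may differ from the paper's exact formulation.
\[
  R_{\mathrm{adv}}(f) \;=\; \mathbb{E}_{(X,Y)\sim P}
    \Big[\, \sup_{\|\delta\| \le \epsilon} \ell\big(f(X+\delta),\, Y\big) \Big],
  \qquad
  \widehat{f}_n \;\in\; \operatorname*{arg\,min}_{f \in \mathcal{F}}\,
    \frac{1}{n} \sum_{i=1}^{n} \sup_{\|\delta_i\| \le \epsilon}
      \ell\big(f(X_i+\delta_i),\, Y_i\big).
\]
Under this standard formulation, the adversarial excess risk studied in the abstract would compare $R_{\mathrm{adv}}(\widehat{f}_n)$ with $\inf_{f} R_{\mathrm{adv}}(f)$.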