Abstract
We study the error of linear regression in the face of adversarial attacks.
In this framework, an adversary changes the input to the regression model in
order to maximize the prediction error. We provide bounds on the prediction
error in the presence of an adversary as a function of the parameter norm and
the error in the absence of such an adversary. We show how these bounds make it
possible to study the adversarial error using analysis from non-adversarial
setups. The obtained results shed light on the robustness of overparameterized
linear models to adversarial attacks. Adding features might be a source of
either additional robustness or brittleness. On the one hand, we use
asymptotic
results to illustrate how double-descent curves can be obtained for the
adversarial error. On the other hand, we derive conditions under which the
adversarial error can grow to infinity as more features are added while, at
the same time, the test error goes to zero. We show that this behavior is
caused by the fact that the norm of the parameter vector grows with the number
of features.
We further establish that $\ell_\infty$- and $\ell_2$-adversarial attacks
might behave fundamentally differently due to how the $\ell_1$- and
$\ell_2$-norms of random projections concentrate. Finally, we show how our
reformulation allows for solving adversarial training as a convex optimization
problem. This fact is then exploited to establish similarities between
adversarial training and parameter-shrinking methods, and to study how
adversarial training might affect the robustness of the estimated models.
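
As a rough sketch of the type of relation these results build on (the notation
here is ours, with attack radius $\delta$ and dual exponents $1/p + 1/q = 1$;
the precise statements in the paper may differ), an $\ell_p$-bounded attack on
a linear predictor $x^\top \beta$ satisfies
\[
\max_{\|\Delta x\|_p \le \delta} \bigl| y - (x + \Delta x)^\top \beta \bigr|
\;\le\; \bigl| y - x^\top \beta \bigr| + \delta \, \|\beta\|_q ,
\]
so the adversarial error is bounded by the error without an adversary plus a
term proportional to a norm of the parameter vector: the $\ell_1$-norm for
$\ell_\infty$-attacks and the $\ell_2$-norm for $\ell_2$-attacks. Under the
same assumptions, and assuming a squared loss, one way the adversarial
training problem can be rewritten as a convex program is
\[
\min_{\beta} \; \frac{1}{n} \sum_{i=1}^{n}
\Bigl( \bigl| y_i - x_i^\top \beta \bigr| + \delta \, \|\beta\|_q \Bigr)^{2} ,
\]
which makes the connection to parameter-shrinking (regularized) estimators
visible.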