Abstract
Perturbation analysis of linear solvers applied to systems arising broadly
in machine learning settings -- for instance, when fitting linear regression
models -- gains an important new perspective when reframed through the lens of
a data poisoning attack. By analyzing how solvers respond to such attacks,
this work aims to contribute to the development of more robust linear solvers
and to provide insight into poisoning attacks on linear solvers. In
particular, we investigate how errors in the input data affect the fitting
error and the accuracy of the solution produced by a linear system-solving
algorithm under perturbations common in adversarial attacks. We propose data
perturbations at two distinct knowledge levels, developing a poisoning
optimization and studying two perturbation methods: Label-guided Perturbation
(LP) and Unconditioning Perturbation (UP). Existing work mainly focuses on
deriving worst-case perturbation bounds from a theoretical perspective, and
the analysis is often limited to specific kinds of linear system solvers. When
the data is intentionally perturbed -- as is the case with data poisoning --
we seek to understand how different kinds of solvers react to these
perturbations and to identify the algorithms most affected by different types
of adversarial attacks.
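As a rough illustration of the kind of experiment the abstract describes -- not the paper's actual LP/UP attacks -- the sketch below perturbs the inputs of a least-squares problem with small random noise and compares how two common solvers (normal equations vs. SVD-based least squares) change their solution and fitting error. All variable names, the noise model, and the problem sizes are assumptions made for this example.

```python
# Minimal sketch (assumed setup, not the paper's method): measure how a small
# input perturbation changes the solution and fitting error of two solvers.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 200, 10
A = rng.standard_normal((n_samples, n_features))
x_true = rng.standard_normal(n_features)
b = A @ x_true + 0.01 * rng.standard_normal(n_samples)

# Illustrative perturbation of the data (random here; a poisoning attack
# would choose it adversarially).
eps = 1e-2
A_pert = A + eps * rng.standard_normal(A.shape)
b_pert = b + eps * rng.standard_normal(b.shape)

def solve_normal_equations(A, b):
    # Direct solve of A^T A x = A^T b (sensitive to conditioning).
    return np.linalg.solve(A.T @ A, A.T @ b)

def solve_lstsq(A, b):
    # SVD-based least squares (generally more robust).
    return np.linalg.lstsq(A, b, rcond=None)[0]

for name, solver in [("normal equations", solve_normal_equations),
                     ("SVD lstsq", solve_lstsq)]:
    x_clean = solver(A, b)
    x_pert = solver(A_pert, b_pert)
    sol_change = np.linalg.norm(x_pert - x_clean) / np.linalg.norm(x_clean)
    fit_err = np.linalg.norm(A @ x_pert - b) / np.linalg.norm(b)
    print(f"{name:17s}  relative solution change: {sol_change:.2e}  "
          f"fitting error: {fit_err:.2e}")
```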