Abstract
Overparameterized models with millions of parameters have been hugely
successful. In this work, we ask: can the need for large models be, at least in
part, due to the \emph{computational} limitations of the learner? Additionally,
we ask: is this situation exacerbated for \emph{robust} learning? We show that
this can indeed be the case. We exhibit learning tasks for which computationally
bounded learners need \emph{significantly more} model parameters than
information-theoretic learners do. Furthermore, we show that even more model
parameters could be necessary for robust learning. In particular, for
computationally bounded learners, we extend the recent result of Bubeck and
Sellke [NeurIPS'2021], which shows that robust models might need more
parameters, to the computational regime, and show that bounded learners could
provably need an even larger number of parameters. Then, we address the
following related question: can we hope to remedy the situation for robust
computationally bounded learning by restricting \emph{adversaries} to also be
computationally bounded, for the sake of obtaining models with fewer parameters?
Here again, we show that this could be possible. Specifically, building on the
work of Garg, Jha, Mahloujifar, and Mahmoody [ALT'2020], we demonstrate a
learning task that can be learned efficiently and robustly against a
computationally bounded attacker, whereas robustness against an
information-theoretic attacker requires the learner to use significantly
more parameters.