Abstract
In a backdoor attack, an adversary inserts maliciously constructed backdoor
examples into a training set to make the resulting model vulnerable to
manipulation. Defending against such attacks typically involves viewing these
inserted examples as outliers in the training set and using techniques from
robust statistics to detect and remove them.
In this work, we present a different approach to the backdoor attack problem.
Specifically, we show that without structural information about the training
data distribution, backdoor attacks are indistinguishable from
naturally-occurring features in the data--and thus impossible to "detect" in a
general sense. Then, guided by this observation, we revisit existing defenses
against backdoor attacks and characterize the (often latent) assumptions on
which they depend. Finally, we explore an alternative perspective
on backdoor attacks: one that assumes these attacks correspond to the strongest
feature in the training data. Under this assumption (which we make formal), we
develop a new primitive for detecting backdoor attacks. Our primitive naturally
gives rise to a detection algorithm that comes with theoretical guarantees and
is effective in practice.
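
To make the "strongest feature" perspective concrete, here is a minimal, hypothetical sketch of how such a detection primitive could be instantiated: score each training example by how strongly it aligns with the dominant direction of the model's learned representations, then flag the most aligned examples as suspected backdoor insertions. This is an illustrative sketch only, not the paper's actual primitive; the function name, the use of per-example representations, and the `frac` threshold are assumptions.

```python
import numpy as np

def flag_strongest_feature_examples(reps, frac=0.05):
    """Illustrative sketch: flag training examples most aligned with the
    dominant ("strongest") direction of their representations.

    reps: (n, d) array of per-example representations, e.g. taken from a
          model trained on the (possibly poisoned) training set.
    frac: fraction of the training set to flag as suspected backdoor examples.
    Returns the indices of the flagged examples.
    """
    centered = reps - reps.mean(axis=0, keepdims=True)
    # Top right-singular vector = strongest direction in representation space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_direction = vt[0]
    # Score each example by its squared projection onto that direction.
    scores = (centered @ top_direction) ** 2
    k = max(1, int(frac * len(reps)))
    return np.argsort(scores)[-k:]
```

In practice, a score of this kind would typically be computed per class, and the flagged examples would be removed before retraining to check whether the attack's success rate drops.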