Abstract
Randomized smoothing is a defensive technique for achieving robustness
against adversarial examples, which are small input perturbations that degrade
the performance of neural network models. Conventional randomized smoothing
adds random noise at a fixed noise level to every input sample to smooth out
adversarial perturbations. This paper proposes a new variational framework that
selects a noise level suited to each input by introducing a noise level
selector. Our experimental results demonstrate enhanced empirical robustness
against adversarial attacks. We also provide and analyze certified robustness
for our sample-wise smoothing method.
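The mechanism described above can be illustrated with a minimal sketch: a smoothed classifier takes a majority vote over predictions on noisy copies of the input, and a per-sample selector chooses the noise level instead of using one fixed value. The function names (`smoothed_predict`, `select_sigma`) and the selector heuristic are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def smoothed_predict(f, x, sigma, n_samples=100, rng=None):
    """Majority-vote prediction of a randomized-smoothing classifier.

    f: base classifier mapping an input array to a class label.
    sigma: Gaussian noise level for this input. Conventional smoothing
    fixes sigma globally; a sample-wise scheme picks it per input.
    """
    rng = np.random.default_rng(rng)
    votes = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        label = f(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

def select_sigma(x):
    # Hypothetical noise level selector: maps each input to a noise
    # level in (0.25, 0.5). A learned selector would replace this.
    return 0.25 + 0.25 * float(np.tanh(np.linalg.norm(x)))

# Usage: a toy base classifier that thresholds the coordinate sum.
base = lambda z: int(z.sum() > 0)
x = np.ones(4)
pred = smoothed_predict(base, x, select_sigma(x), rng=0)
```

In this sketch, an input far from the toy decision boundary tolerates the added noise, so the vote recovers the clean prediction; the certified-robustness analysis mentioned in the abstract quantifies this tolerance formally.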