Abstract
Real-life applications of deep neural networks are hindered by their unstable predictions when faced with noisy inputs and adversarial attacks. In this context, the certified radius is a crucial indicator of a model's robustness.
However, how can one design an efficient classifier with an associated certified radius? Randomized smoothing provides a promising framework: by injecting noise into the inputs, it produces a smoothed and robust classifier (sketched below). In this paper, we first show that the variance introduced by Monte-Carlo sampling in the randomized smoothing estimate closely interacts with two other important properties of the classifier, i.e., its Lipschitz constant and margin. More precisely, our work emphasizes the dual impact of the Lipschitz constant of the base classifier on both the smoothed classifier and the empirical variance. To increase the certified robust radius, we introduce a different way of converting logits into probability vectors for the base classifier, designed to exploit this variance-margin trade-off. We combine Bernstein's concentration inequality with enhanced Lipschitz bounds for randomized smoothing. Experimental results show a significant improvement in certified
accuracy compared to current state-of-the-art methods. Our novel certification
procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certified radius in a zero-shot manner.
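
To make the quantities above concrete, the following is a minimal sketch of the standard randomized smoothing construction and of the concentration step that Monte-Carlo sampling requires, written in the Gaussian setting of Cohen et al. (2019). The symbols $f$, $g$, $\sigma$, $p_A$, $p_B$, $n$, $v$, and $t$ are illustrative notation rather than the paper's own, and the paper's enhanced Lipschitz bounds are not reproduced here.

The smoothed classifier $g$ returns the class that the base classifier $f$ predicts most often under Gaussian perturbations of the input $x$:
\[
  g(x) \;=\; \arg\max_{c}\; \mathbb{P}_{\varepsilon \sim \mathcal{N}(0,\sigma^{2} I)}\big[\, f(x+\varepsilon) = c \,\big].
\]
Given a lower bound $\underline{p_A}$ on the probability of the top class and an upper bound $\overline{p_B}$ on that of the runner-up, $g$ is certifiably constant within the $\ell_2$ radius
\[
  R \;=\; \frac{\sigma}{2}\Big(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\Big),
\]
where $\Phi^{-1}$ is the Gaussian quantile function. These probabilities must be estimated from $n$ Monte-Carlo samples, and a Bernstein-type bound for i.i.d. variables in $[0,1]$ with mean $p$, empirical mean $\hat{p}$, and variance $v$ controls the estimation error:
\[
  \mathbb{P}\big(\,|\hat{p} - p| \ge t\,\big) \;\le\; 2\exp\!\Big(-\frac{n t^{2}}{2v + \tfrac{2}{3}t}\Big).
\]
At a fixed sample budget $n$, a smaller variance $v$ tightens this bound and hence the certified bounds $\underline{p_A}$ and $\overline{p_B}$, which is why reducing the empirical variance of the base classifier's outputs can enlarge the certified radius.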