Abstract
As a certified defense technique, randomized smoothing has received considerable attention due to its scalability to large datasets and neural networks. However, several important questions remain unanswered, such as (i) whether the Gaussian mechanism is an appropriate option for certifying ℓ2-norm robustness, and (ii) whether there is an appropriate randomized (smoothing) mechanism for certifying ℓ∞-norm robustness. To shed light on these questions, we argue that the main difficulty lies in how to assess the appropriateness of each randomized mechanism. In this paper, we propose a generic framework that connects existing frameworks for assessing randomized mechanisms. Under our framework, for a randomized mechanism that can certify a certain extent of robustness, we define the magnitude of its required additive noise as the metric for assessing its appropriateness. We also prove lower bounds on this metric for the ℓ2-norm and ℓ∞-norm cases as the criteria for assessment. Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of additive noise each mechanism requires against these lower bounds (criteria). We first conclude that the Gaussian mechanism is indeed an appropriate option for certifying ℓ2-norm robustness. Surprisingly, we show that the Gaussian mechanism, rather than the Exponential mechanism, is also an appropriate option for certifying ℓ∞-norm robustness. Finally, we generalize our framework to the ℓp-norm for any p ≥ 2. Our theoretical findings are verified by evaluations on CIFAR10 and ImageNet.
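To make the setting concrete, the following is a minimal sketch of the prediction step of randomized smoothing with a Gaussian mechanism: the smoothed classifier returns the majority vote of a base classifier over Gaussian-perturbed copies of the input. The base classifier `f`, the noise level `sigma`, and the sample count `n` are illustrative placeholders, not details taken from this paper.

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n=1000, seed=0):
    """Majority-vote prediction of a Gaussian-smoothed classifier.

    f     -- base classifier mapping an input array to a class label
    sigma -- standard deviation of the additive Gaussian noise
    n     -- number of Monte Carlo noise samples
    """
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        # Add i.i.d. Gaussian noise and query the base classifier.
        label = f(x + rng.normal(0.0, sigma, size=x.shape))
        counts[label] = counts.get(label, 0) + 1
    # Return the most frequently predicted class.
    return max(counts, key=counts.get)

# Toy base classifier: predicts 1 iff the first coordinate is positive.
f = lambda v: int(v[0] > 0)
print(smoothed_predict(f, np.array([0.5, -0.2])))
```

The magnitude of `sigma` here is exactly the quantity the paper uses as its assessment metric: a mechanism that needs less additive noise to certify the same radius is deemed more appropriate.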