Abstract
As a certified defense technique, randomized smoothing has received considerable attention due to its scalability to large datasets and neural networks. However, several important questions remain unanswered, such as (i) whether the Gaussian mechanism is an appropriate option for certifying ℓ2-norm robustness, and (ii) whether there is an appropriate randomized (smoothing) mechanism for certifying ℓ∞-norm robustness. To shed light on these questions, we argue that the main difficulty lies in how to assess the appropriateness of each randomized mechanism. In this paper, we propose a generic framework that connects existing frameworks to assess randomized mechanisms. Under our framework, for a randomized mechanism that can certify a certain level of robustness, we define the magnitude of its required additive noise as the metric for assessing its appropriateness. We also prove lower bounds on this metric for the ℓ2-norm and ℓ∞-norm cases, which serve as the criteria for assessment. Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of the additive noise they require against these lower bounds (criteria). We first conclude that the Gaussian mechanism is indeed an appropriate option for certifying ℓ2-norm robustness. Surprisingly, we show that the Gaussian mechanism is also an appropriate option for certifying ℓ∞-norm robustness, rather than the Exponential mechanism. Finally, we generalize our framework to the ℓp-norm for any p ≥ 2. Our theoretical findings are verified by evaluations on CIFAR10 and ImageNet.
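For context, the Gaussian mechanism discussed in the abstract underlies the standard randomized-smoothing pipeline: predictions are averaged over Gaussian perturbations of the input, and a certified ℓ2 radius is derived from the smoothed top-class probability. The sketch below illustrates that standard construction (in the style of the common radius formula R = σ·Φ⁻¹(p_A)); it is not the paper's own assessment framework, and `base_classifier` is a hypothetical stand-in for any trained model.

```python
# Minimal sketch of Gaussian randomized smoothing (illustrative only;
# `base_classifier` is a hypothetical stand-in, not from the paper).
import numpy as np
from scipy.stats import norm


def smoothed_counts(base_classifier, x, sigma, n_samples, num_classes, rng):
    """Estimate class counts of the smoothed classifier under N(0, sigma^2 I) noise."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        counts[base_classifier(noisy)] += 1
    return counts


def certified_l2_radius(p_a, sigma):
    """Certified l2 radius R = sigma * Phi^{-1}(p_a), valid when p_a > 1/2.

    p_a is (a lower bound on) the smoothed probability of the top class;
    if p_a <= 1/2 no nontrivial radius is certified.
    """
    if p_a <= 0.5:
        return 0.0
    return sigma * norm.ppf(p_a)
```

In practice `p_a` is a high-confidence lower bound obtained from Monte Carlo sampling (e.g. via a binomial confidence interval), so the certified radius holds with high probability rather than deterministically.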