As a certified defensive technique, randomized smoothing has received
considerable attention due to its scalability to large datasets and neural
networks. However, several important questions remain unanswered, such as (i)
whether the Gaussian mechanism is an appropriate option for certifying
$\ell_2$-norm robustness, and (ii) whether there is an appropriate randomized
(smoothing) mechanism to certify $\ell_\infty$-norm robustness. To shed light
on these questions, we argue that the main difficulty lies in assessing the
appropriateness of each randomized mechanism. In this paper, we propose a
generic framework, connecting the existing frameworks of
\cite{lecuyer2018certified, li2019certified}, for assessing randomized mechanisms.
Under our framework, for any randomized mechanism that certifies a given level
of robustness, we define the magnitude of the additive noise it requires as
the metric of its appropriateness. We also prove lower bounds on
this metric for the $\ell_2$-norm and $\ell_\infty$-norm cases as the criteria
for assessment. Based on our framework, we assess the Gaussian and Exponential
mechanisms by comparing the magnitude of additive noise required by these
mechanisms and the lower bounds (criteria). We first conclude that the Gaussian
mechanism is indeed an appropriate option to certify $\ell_2$-norm robustness.
Surprisingly, we show that the Gaussian mechanism, rather than the Exponential
mechanism, is also an appropriate option for certifying $\ell_\infty$-norm
robustness. Finally, we generalize our framework to the $\ell_p$-norm for any
$p\geq 2$. Our theoretical findings are verified by evaluations on CIFAR10 and
ImageNet.
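For concreteness, one well-known $\ell_2$ certificate obtained with the Gaussian mechanism (the bound of Cohen et al.; this specific form is standard background, not a result of this paper) illustrates how the noise magnitude $\sigma$ trades off against the certified radius:

\begin{equation*}
R \;=\; \frac{\sigma}{2}\left(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\right),
\end{equation*}

where $\sigma$ is the standard deviation of the additive Gaussian noise, $\Phi^{-1}$ is the inverse standard normal CDF, and $p_A$ (resp. $p_B$) is a lower (resp. upper) bound on the probability of the top (resp. runner-up) class under the smoothed classifier. Larger $\sigma$ enlarges the certifiable radius $R$ but degrades the base classifier's accuracy, which is precisely why the required noise magnitude serves as a natural metric of a mechanism's appropriateness.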