Modern machine learning models with very high accuracy have been shown to be
vulnerable to small, adversarially chosen perturbations of the input. Given
black-box access to a high-accuracy classifier $f$, we show how to construct a
new classifier $g$ that has high accuracy and is also robust to adversarial
$\ell_2$-bounded perturbations. Our algorithm builds upon the framework of
\textit{randomized smoothing}, which has recently been shown to outperform all
previous defenses against $\ell_2$-bounded adversaries. Using techniques such as
random partitions and the doubling dimension, we bound the adversarial error of
$g$ in terms of the optimal error. This paper focuses on our conceptual
contribution, but we present two examples to illustrate our framework and argue
that, under certain assumptions, our bounds are optimal for these cases.
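For context, randomized smoothing typically constructs the robust classifier by taking a majority vote of the base classifier under Gaussian input noise; a common formulation (the noise level $\sigma$ below is illustrative and not a quantity fixed by this work) is
\[
g(x) = \argmax_{c} \; \Pr_{\varepsilon \sim \mathcal{N}(0,\, \sigma^2 I)}\bigl[\, f(x + \varepsilon) = c \,\bigr].
\]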