Abstract
Machine learning models have achieved remarkable success across diverse
domains but remain vulnerable to adversarial attacks. Empirical defenses
often fail as new attacks emerge, rendering them obsolete and shifting the
focus toward certification-based defenses. Among these, randomized smoothing
has emerged as a promising technique. This study reviews the theoretical
foundations and empirical effectiveness of randomized smoothing and its
derivatives in certifying machine learning classifiers, with a focus on
scalability. We provide an in-depth exploration of the fundamental concepts
underlying randomized smoothing, highlight its theoretical guarantees for
certifying robustness against adversarial perturbations, and discuss the
challenges faced by existing methodologies.
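To make the certification guarantee mentioned above concrete, the following is a minimal sketch of Cohen et al.-style randomized smoothing: the smoothed classifier returns the class most probable under Gaussian input noise, and if the empirical top-class probability pA exceeds 1/2, an L2 radius of sigma * Phi^{-1}(pA) is certified. The toy base classifier `base_classify` and all parameter values are illustrative assumptions, and a proper implementation would use a lower confidence bound on pA rather than the raw empirical estimate.

```python
# Hedged sketch of randomized smoothing certification.
# `base_classify` is a hypothetical toy classifier, not from any paper or library.
import random
from statistics import NormalDist

def base_classify(x):
    # Toy base classifier: predicts class 1 if the feature sum is non-negative.
    return 1 if sum(x) >= 0 else 0

def certify(x, sigma=0.5, n=1000, seed=0):
    """Estimate the smoothed classifier's top class under noise N(0, sigma^2 I)
    and a certified L2 radius R = sigma * Phi^{-1}(pA).
    Here pA is the empirical top-class frequency (a real implementation
    would replace it with a high-confidence lower bound)."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        c = base_classify(noisy)
        counts[c] = counts.get(c, 0) + 1
    top, top_count = max(counts.items(), key=lambda kv: kv[1])
    # Clamp away from 1.0 so the inverse Gaussian CDF stays finite.
    p_a = min(top_count / n, 1.0 - 1.0 / n)
    if p_a <= 0.5:
        return top, 0.0  # abstain: no nontrivial certificate
    radius = sigma * NormalDist().inv_cdf(p_a)
    return top, radius

label, radius = certify([2.0, 1.5, 0.5])
print(label, round(radius, 3))
```

The certificate guarantees that every input within L2 distance `radius` of `x` receives the same prediction from the smoothed classifier, which is the scalability-friendly property the survey examines.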