Abstract
Local robustness verification can certify that a neural network is robust with respect to any perturbation of a specific input within a certain distance. We call this distance the robustness radius. We observe that the robustness radii of correctly classified inputs are much larger than those of misclassified inputs, which include adversarial examples, especially those produced by strong adversarial attacks. A second observation is that the robustness radii of correctly classified inputs often follow a normal distribution. Based on these two observations, we propose to validate inputs for neural networks via runtime local robustness verification. Experiments show that our approach can protect neural networks from adversarial examples and improve their accuracy.
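To make the idea concrete, below is a minimal sketch of threshold-based input validation built on the two observations above. It assumes some local robustness verifier supplies a radius for each input (represented here by a hypothetical `verified_radius` value passed in by the caller); the function names, the Gaussian fit, and the `k`-sigma threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_radius_threshold(correct_radii, k=2.0):
    """Fit a normal distribution to the robustness radii of correctly
    classified calibration inputs and derive a rejection threshold.

    correct_radii : array-like of radii obtained from a local robustness
                    verifier on inputs known to be correctly classified.
    k             : number of standard deviations below the mean at which
                    inputs are flagged (illustrative choice).
    """
    mu = float(np.mean(correct_radii))
    sigma = float(np.std(correct_radii))
    return mu - k * sigma

def validate_input(verified_radius, threshold):
    """Accept an input only if its verified robustness radius is at least
    the threshold; otherwise flag it as a likely misclassification or
    adversarial example."""
    return verified_radius >= threshold

# Example usage with made-up numbers (not results from the paper):
calibration_radii = np.array([0.031, 0.028, 0.035, 0.030, 0.033])
t = fit_radius_threshold(calibration_radii)
print(validate_input(0.029, t))   # True  -> accept
print(validate_input(0.004, t))   # False -> reject as suspicious
```

The key design choice sketched here is that the threshold is calibrated only from correctly classified inputs, exploiting the reported gap between their radii and those of misclassified or adversarial inputs.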