Abstract
We apply concepts from manifold regularization to develop new regularization techniques for training locally stable deep neural networks. Our regularizers are based on a sparsification of the graph Laplacian which holds with high probability when the data is sparse in high dimensions, as is common in deep learning. Empirically, our networks exhibit stability in a diverse set of perturbation models, including ℓ2, ℓ∞, and Wasserstein-based perturbations; in particular, we achieve 40% adversarial accuracy against an adaptive PGD attack using ℓ∞ perturbations of size ϵ = 8/255, and state-of-the-art verified accuracy of 21% in the same perturbation model. Furthermore, our techniques are efficient, incurring overhead on par with two additional parallel forward passes through the network.
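As a minimal sketch of the kind of regularizer the abstract describes (not the paper's exact construction), a graph-Laplacian smoothness term can be approximated by penalizing output variation between randomly perturbed copies of each input. The function name `stability_penalty`, the uniform ℓ∞ sampling, and the squared-difference penalty below are all illustrative assumptions:

```python
import torch

def stability_penalty(model, x, eps=8 / 255):
    """A sketch of a manifold-regularization-style stability term,
    assuming the graph-Laplacian smoothness penalty is approximated
    over randomly perturbed copies of each input (the sampling scheme
    and penalty form here are assumptions, not the paper's method).
    """
    # Draw two random perturbations inside the l-infinity ball of radius eps.
    d1 = torch.empty_like(x).uniform_(-eps, eps)
    d2 = torch.empty_like(x).uniform_(-eps, eps)
    # Two extra forward passes, consistent with the overhead the abstract
    # reports ("on par with two additional parallel forward passes").
    out1 = model(x + d1)
    out2 = model(x + d2)
    # Penalize output variation across the perturbed pair, a discrete
    # smoothness term encouraging locally stable predictions.
    return ((out1 - out2) ** 2).sum(dim=1).mean()
```

In training, such a term would be added to the usual task loss, e.g. `loss = cross_entropy(model(x), y) + lam * stability_penalty(model, x)`, with `lam` a hypothetical weighting hyperparameter.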