Adversarial Examples

CausAdv: A Causal-based Framework for Detecting Adversarial Examples

Authors: Hichem Debbi | Published: 2024-10-29
Framework
Adversarial Examples

Integrating uncertainty quantification into randomized smoothing based robustness guarantees

Authors: Sina Däubener, Kira Maag, David Krueger, Asja Fischer | Published: 2024-10-27
Adversarial Examples
Equivalence Evaluation

Feature Averaging: An Implicit Bias of Gradient Descent Leading to Non-Robustness in Neural Networks

Authors: Binghui Li, Zhixuan Pan, Kaifeng Lyu, Jian Li | Published: 2024-10-14
Convergence Analysis
Adversarial Examples

Minimax rates of convergence for nonparametric regression under adversarial attacks

Authors: Jingfu Peng, Yuhong Yang | Published: 2024-10-12
Adversarial Examples
Adversarial Training

Time Traveling to Defend Against Adversarial Example Attacks in Image Classification

Authors: Anthony Etim, Jakub Szefer | Published: 2024-10-10
Attack Methods
Adversarial Examples
Defense Methods

LOTOS: Layer-wise Orthogonalization for Training Robust Ensembles

Authors: Ali Ebrahimpour-Boroojeny, Hari Sundaram, Varun Chandrasekaran | Published: 2024-10-07
Adversarial Examples
Adversarial Training

Impact of White-Box Adversarial Attacks on Convolutional Neural Networks

Authors: Rakesh Podder, Sudipto Ghosh | Published: 2024-10-02
Model Performance Evaluation
Attack Methods
Adversarial Examples

On Using Certified Training towards Empirical Robustness

Authors: Alessandro De Palma, Serge Durand, Zakaria Chihani, François Terrier, Caterina Urban | Published: 2024-10-02 | Updated: 2025-03-24
Adversarial Examples
Regularization

Boosting Certified Robustness for Time Series Classification with Efficient Self-Ensemble

Authors: Chang Dong, Zhengyang Li, Liangwei Zheng, Weitong Chen, Wei Emma Zhang | Published: 2024-09-04 | Updated: 2024-09-19
Adversarial Examples
Evaluation Methods
Watermark Evaluation

Adversarial Attacks on Machine Learning-Aided Visualizations

Authors: Takanori Fujiwara, Kostiantyn Kucher, Junpeng Wang, Rafael M. Martins, Andreas Kerren, Anders Ynnerman | Published: 2024-09-04 | Updated: 2024-09-24
Backdoor Attack
Adversarial Examples
Visualization Vulnerability