Adversarial Examples

Feature Averaging: An Implicit Bias of Gradient Descent Leading to Non-Robustness in Neural Networks

Authors: Binghui Li, Zhixuan Pan, Kaifeng Lyu, Jian Li | Published: 2024-10-14
Convergence Analysis
Adversarial Examples

Minimax rates of convergence for nonparametric regression under adversarial attacks

Authors: Jingfu Peng, Yuhong Yang | Published: 2024-10-12
Adversarial Examples
Adversarial Training

Time Traveling to Defend Against Adversarial Example Attacks in Image Classification

Authors: Anthony Etim, Jakub Szefer | Published: 2024-10-10
Attack Methods
Adversarial Examples
Defense Methods

LOTOS: Layer-wise Orthogonalization for Training Robust Ensembles

Authors: Ali Ebrahimpour-Boroojeny, Hari Sundaram, Varun Chandrasekaran | Published: 2024-10-07
Adversarial Examples
Adversarial Training

Impact of White-Box Adversarial Attacks on Convolutional Neural Networks

Authors: Rakesh Podder, Sudipto Ghosh | Published: 2024-10-02
Model Performance Evaluation
Attack Methods
Adversarial Examples

On Using Certified Training towards Empirical Robustness

Authors: Alessandro De Palma, Serge Durand, Zakaria Chihani, François Terrier, Caterina Urban | Published: 2024-10-02 | Updated: 2025-03-24
Adversarial Examples
Regularization

Boosting Certified Robustness for Time Series Classification with Efficient Self-Ensemble

Authors: Chang Dong, Zhengyang Li, Liangwei Zheng, Weitong Chen, Wei Emma Zhang | Published: 2024-09-04 | Updated: 2024-09-19
Adversarial Examples
Evaluation Methods
Watermark Evaluation

Adversarial Attacks on Machine Learning-Aided Visualizations

Authors: Takanori Fujiwara, Kostiantyn Kucher, Junpeng Wang, Rafael M. Martins, Andreas Kerren, Anders Ynnerman | Published: 2024-09-04 | Updated: 2024-09-24
Backdoor Attacks
Adversarial Examples
Visualization Vulnerabilities

Comprehensive Botnet Detection by Mitigating Adversarial Attacks, Navigating the Subtleties of Perturbation Distances and Fortifying Predictions with Conformal Layers

Authors: Rahul Yumlembam, Biju Issac, Seibu Mary Jacob, Longzhi Yang | Published: 2024-09-01
Poisoning
Adversarial Examples
Evaluation Methods

Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks

Authors: Hetvi Waghela, Jaydip Sen, Sneha Rakshit | Published: 2024-08-20
Poisoning
Adversarial Examples
Defense Methods
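The FGSM and PGD attacks named in the title above are standard gradient-based attacks: FGSM takes one signed-gradient step of size ε, and PGD iterates smaller steps while projecting back into the ε-ball around the input. A minimal numpy sketch on a toy logistic-regression model (the `fgsm`/`pgd` functions and the model parameters here are illustrative, not the paper's setup):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fgsm(x, w, b, y, eps):
    """One FGSM step on a logistic-regression loss L = -log sigmoid(y * (w.x + b)), y in {-1, +1}."""
    z = y * (x @ w + b)
    # Analytic input gradient: dL/dx = -y * sigmoid(-z) * w
    grad = -y * sigmoid(-z) * w
    # Move the input in the sign direction of the gradient to increase the loss
    return x + eps * np.sign(grad)

def pgd(x, w, b, y, eps, alpha, steps):
    """Iterated FGSM with step size alpha, projected onto the L-inf eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm(x_adv, w, b, y, alpha)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection keeps the perturbation bounded
    return x_adv
```

Defenses such as those evaluated in the paper (e.g. adversarial training) would generate `x_adv` with these attacks during training and minimize the loss on the perturbed inputs.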