Adversarial Examples

Feedback Learning for Improving the Robustness of Neural Networks

Authors: Chang Song, Zuoguan Wang, Hai Li | Published: 2019-09-12
Class Imbalance
Attack Methods
Adversarial Examples

Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification

Authors: Eitan Rothberg, Tingting Chen, Luo Jie, Hao Ji | Published: 2019-09-10
Adversarial Examples
Background Pixel Attack
Adaptive Adversarial Training

Effectiveness of Adversarial Examples and Defenses for Malware Classification

Authors: Robert Podschwadt, Hassan Takabi | Published: 2019-09-10
Attack Methods
Adversarial Examples
Adaptive Adversarial Training

Learning to Disentangle Robust and Vulnerable Features for Adversarial Detection

Authors: Byunggill Joe, Sung Ju Hwang, Insik Shin | Published: 2019-09-10
Adversarial Examples
Adversarial Example Detection
Adversarial Training

BOSH: An Efficient Meta Algorithm for Decision-based Attacks

Authors: Zhenxin Xiao, Puyudi Yang, Yuchen Jiang, Kai-Wei Chang, Cho-Jui Hsieh | Published: 2019-09-10 | Updated: 2019-10-14
Adversarial Examples
Adversarial Example Detection
Adversarial Training

When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures

Authors: Gil Fidel, Ron Bitton, Asaf Shabtai | Published: 2019-09-08
Poisoning
Adversarial Examples
Adversarial Example Detection

On the Need for Topology-Aware Generative Models for Manifold-Based Defenses

Authors: Uyeong Jang, Susmit Jha, Somesh Jha | Published: 2019-09-07 | Updated: 2020-02-17
Topology Analysis
Adversarial Examples
Machine Learning

Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation

Authors: Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, Pushmeet Kohli | Published: 2019-09-03 | Updated: 2019-12-20
Training Improvements
Adversarial Examples
Adversarial Example Vulnerability

High Accuracy and High Fidelity Extraction of Neural Networks

Authors: Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot | Published: 2019-09-03 | Updated: 2020-03-03
Model Extraction Attack
Model Evaluation
Adversarial Examples

Universal, transferable and targeted adversarial attacks

Authors: Junde Wu, Rao Fu | Published: 2019-08-29 | Updated: 2022-06-13
Poisoning
Adversarial Examples
Adversarial Attack Detection