Adversarial Learning

ODE guided Neural Data Augmentation Techniques for Time Series Data and its Benefits on Robustness

Authors: Anindya Sarkar, Anirudh Sunder Raj, Raghu Sesha Iyengar | Published: 2019-10-15 | Updated: 2020-09-27
Data Augmentation Techniques
Model Robustness
Adversarial Learning

Partially Encrypted Machine Learning using Functional Encryption

Authors: Theo Ryffel, Edouard Dufour-Sans, Romain Gay, Francis Bach, David Pointcheval | Published: 2019-05-24 | Updated: 2021-09-23
Privacy Methods
Model Performance Evaluation
Adversarial Learning

Learning More Robust Features with Adversarial Training

Authors: Shuangtao Li, Yuanke Chen, Yanlin Peng, Lin Bai | Published: 2018-04-20
Adversarial Learning
Adversarial Training
Watermarking Techniques

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks

Authors: Jonathan Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli | Published: 2018-02-15 | Updated: 2018-06-12
Adversarial Learning
Adversarial Training
Adversarial Attacks

Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification

Authors: Xiaoyu Cao, Neil Zhenqiang Gong | Published: 2017-09-17 | Updated: 2019-12-31
Model Robustness Guarantees
Adversarial Learning
Adversarial Example Detection

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

Authors: Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh | Published: 2017-09-13 | Updated: 2018-02-10
Model Robustness Guarantees
Adversarial Learning
Adversarial Examples

Towards Proving the Adversarial Robustness of Deep Neural Networks

Authors: Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | Published: 2017-09-08
Model Robustness Guarantees
Robustness Improvement
Adversarial Learning

Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks

Authors: Yi Han, Benjamin I. P. Rubinstein | Published: 2017-04-06 | Updated: 2017-05-25
Poisoning
Model Robustness Guarantees
Adversarial Learning

Comment on “Biologically inspired protection of deep networks from adversarial attacks”

Authors: Wieland Brendel, Matthias Bethge | Published: 2017-04-05
Trigger Detection
Model Robustness Guarantees
Adversarial Learning