AI Security Portal Bot

Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification

Authors: Chuanshuai Chen, Jiazhu Dai | Published: 2020-07-11 | Updated: 2021-03-15
Text Generation Methods
Backdoor Attack
Poisoning

Generating Adversarial Inputs Using A Black-box Differential Technique

Authors: João Batista Pereira Matos Júnior, Lucas Carvalho Cordeiro, Marcelo d'Amorim, Xiaowei Huang | Published: 2020-07-10
Performance Evaluation
Attack Method
Adversarial Examples

Differentially Private Simple Linear Regression

Authors: Daniel Alabi, Audra McMillan, Jayshree Sarathy, Adam Smith, Salil Vadhan | Published: 2020-07-10
Hyperparameter Tuning
Privacy Evaluation
Computational Efficiency

Improving Adversarial Robustness by Enforcing Local and Global Compactness

Authors: Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas Abraham, Dinh Phung | Published: 2020-07-10
Poisoning
Performance Evaluation
Deep Learning

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

Authors: Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos | Published: 2020-07-09
Poisoning
Model Robustness
Attack Method

Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs

Authors: Rana Abou Khamis, Ashraf Matrawy | Published: 2020-07-08
Poisoning
Causes of Performance Degradation
Adversarial Training

On the relationship between class selectivity, dimensionality, and robustness

Authors: Matthew L. Leavitt, Ari S. Morcos | Published: 2020-07-08 | Updated: 2020-10-13
Poisoning
Adversarial Learning
Vulnerability Analysis

How benign is benign overfitting?

Authors: Amartya Sanyal, Puneet K Dokania, Varun Kanade, Philip H. S. Torr | Published: 2020-07-08
Adversarial Examples
Adversarial Learning
Overfitting and Memorization

BlockFLow: An Accountable and Privacy-Preserving Solution for Federated Learning

Authors: Vaikkunth Mugunthan, Ravi Rahman, Lalana Kagal | Published: 2020-07-08
Performance Evaluation
Privacy Evaluation
Attack Pattern Extraction

Defending against Backdoors in Federated Learning with Robust Learning Rate

Authors: Mustafa Safa Ozdayi, Murat Kantarcioglu, Yulia R. Gel | Published: 2020-07-07 | Updated: 2021-07-29
Backdoor Attack
Adversarial Learning
Defense Mechanism