AI Security Portal bot

Towards Backdoor Attacks and Defense in Robust Machine Learning Models

Authors: Ezekiel Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay | Published: 2020-02-25 | Updated: 2023-01-11
Backdoor Attacks
Poisoning
Robustness Evaluation

Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space

Authors: Camilo Pestana, Naveed Akhtar, Wei Liu, David Glance, Ajmal Mian | Published: 2020-02-25
Robustness Evaluation
Adversarial Learning
Defense Methods

HYDRA: Pruning Adversarially Robust Neural Networks

Authors: Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana | Published: 2020-02-24 | Updated: 2020-11-10
Robustness Evaluation
Adversarial Training
Optimization Problems

Approximate Data Deletion from Machine Learning Models

Authors: Zachary Izzo, Mary Anne Smart, Kamalika Chaudhuri, James Zou | Published: 2020-02-24 | Updated: 2021-02-23
Machine Unlearning
Model Evaluation
Robustness Evaluation

Stealing Black-Box Functionality Using The Deep Neural Tree Architecture

Authors: Daniel Teitelman, Itay Naeh, Shie Mannor | Published: 2020-02-23
Training Data Extraction Methods
Training Methods
Machine Learning Methods

An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning

Authors: Xue Yang, Yan Feng, Weijun Fang, Jun Shao, Xiaohu Tang, Shu-Tao Xia, Rongxing Lu | Published: 2020-02-23 | Updated: 2021-08-15
Privacy Protection Mechanisms
Federated Learning
Defense Methods

Neuron Shapley: Discovering the Responsible Neurons

Authors: Amirata Ghorbani, James Zou | Published: 2020-02-23 | Updated: 2020-11-13
Performance Evaluation
Feature Importance Analysis
Vulnerability Prediction

Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks

Authors: Kirthi Shankar Sivamani, Rajeev Sahay, Aly El Gamal | Published: 2020-02-22
Performance Evaluation
Adversarial Training
Defense Methods

Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers

Authors: Chen Zhu, Renkun Ni, Ping-yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein | Published: 2020-02-22
Robustness Evaluation
Optimization Problems
Regularization

Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples

Authors: Guanxiong Liu, Issa Khalil, Abdallah Khreishah | Published: 2020-02-22 | Updated: 2020-02-27
Performance Evaluation
Adversarial Examples
Adversarial Training