AI Security Portal bot

Feedback Learning for Improving the Robustness of Neural Networks

Authors: Chang Song, Zuoguan Wang, Hai Li | Published: 2019-09-12
Class imbalance
Attack methods
Adversarial examples

Learning-Guided Network Fuzzing for Testing Cyber-Physical System Defences

Authors: Yuqi Chen, Christopher M. Poskitt, Jun Sun, Sridhar Adepu, Fan Zhang | Published: 2019-09-12
Sensor state estimation
Attack methods
Machine learning applications

Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging

Authors: Luis Muñoz-González, Kenneth T. Co, Emil C. Lupu | Published: 2019-09-11
Malicious node detection
Privacy preservation in machine learning

Structural Robustness for Deep Learning Architectures

Authors: Carlos Lassance, Vincent Gripon, Jian Tang, Antonio Ortega | Published: 2019-09-11
Attack methods
Machine learning applications
Machine learning techniques

Sparse and Imperceivable Adversarial Attacks

Authors: Francesco Croce, Matthias Hein | Published: 2019-09-11
Poisoning
Attack methods
Machine learning techniques

PDA: Progressive Data Augmentation for General Robustness of Deep Neural Networks

Authors: Hang Yu, Aishan Liu, Xianglong Liu, Gengchao Li, Ping Luo, Ran Cheng, Jichen Yang, Chongzhi Zhang | Published: 2019-09-11 | Updated: 2020-02-24
Poisoning
Model robustness
Attack methods

Identifying and Resisting Adversarial Videos Using Temporal Consistency

Authors: Xiaojun Jia, Xingxing Wei, Xiaochun Cao | Published: 2019-09-11
Adversarial spectral attack detection
Machine learning techniques

Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification

Authors: Eitan Rothberg, Tingting Chen, Luo Jie, Hao Ji | Published: 2019-09-10
Adversarial examples
Background pixel attacks
Adaptive adversarial training

Effectiveness of Adversarial Examples and Defenses for Malware Classification

Authors: Robert Podschwadt, Hassan Takabi | Published: 2019-09-10
Attack methods
Adversarial examples
Adaptive adversarial training

Byzantine-Resilient Stochastic Gradient Descent for Distributed Learning: A Lipschitz-Inspired Coordinate-wise Median Approach

Authors: Haibo Yang, Xin Zhang, Minghong Fang, Jia Liu | Published: 2019-09-10
Byzantine attack countermeasures
Convergence guarantees
Computational efficiency