AI Security Portal Bot

Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN Approach

Authors: Hu Ding, Fan Yang, Jiawei Huang | Published: 2020-06-14 | Updated: 2021-02-20
Algorithms
Poisoning
Machine Learning Fundamentals

Defensive Approximation: Securing CNNs using Approximate Computing

Authors: Amira Guesmi, Ihsen Alouani, Khaled Khasawneh, Mouna Baklouti, Tarek Frikha, Mohamed Abid, Nael Abu-Ghazaleh | Published: 2020-06-13 | Updated: 2021-07-29
Adversarial Examples
Adversarial Attack Detection
Approximate Computing

Rethinking Clustering for Robustness

Authors: Motasem Alfarra, Juan C. Pérez, Adel Bibi, Ali Thabet, Pablo Arbeláez, Bernard Ghanem | Published: 2020-06-13 | Updated: 2021-11-19
Training Improvements
Future Research
Machine Learning Fundamentals

Adversarial Self-Supervised Contrastive Learning

Authors: Minseon Kim, Jihoon Tack, Sung Ju Hwang | Published: 2020-06-13 | Updated: 2020-10-26
Performance Evaluation
Poisoning
Adversarial Attack Detection

Leakage of Dataset Properties in Multi-Party Machine Learning

Authors: Wanrong Zhang, Shruti Tople, Olga Ohrimenko | Published: 2020-06-12 | Updated: 2021-06-17
Privacy Loss Analysis
Membership Inference
Attack Types

Backdoor Attacks on Federated Meta-Learning

Authors: Chien-Lun Chen, Leana Golubchik, Marco Paolieri | Published: 2020-06-12 | Updated: 2020-12-16
Backdoor Attacks
Poisoning
Federated Learning

Provably Robust Metric Learning

Authors: Lu Wang, Xuanqing Liu, Jinfeng Yi, Yuan Jiang, Cho-Jui Hsieh | Published: 2020-06-12 | Updated: 2020-12-19
Algorithms
Adversarial Attack Detection
Optimization Methods

Robustness to Adversarial Attacks in Learning-Enabled Controllers

Authors: Zikang Xiong, Joe Eappen, He Zhu, Suresh Jagannathan | Published: 2020-06-11
Safety Properties
Attack Types
Adversarial Attack Detection

Backdoors in Neural Models of Source Code

Authors: Goutham Ramakrishnan, Aws Albarghouthi | Published: 2020-06-11
Backdoor Attacks
Program Analysis
Poisoning

On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples

Authors: Richard Y. Zhang | Published: 2020-06-11 | Updated: 2020-10-26
Algorithms
Safety Properties
Machine Learning Fundamentals