Backdoor Attacks

Get a Model! Model Hijacking Attack Against Machine Learning Models

Authors: Ahmed Salem, Michael Backes, Yang Zhang | Published: 2021-11-08
Dataset Evaluation
Backdoor Attack
Adversarial Attack Methods

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks

Authors: Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao | Published: 2021-10-13 | Updated: 2022-06-15
Backdoor Attack
Forensic Report
Adversarial Attack Methods

Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication

Authors: Elliu Huang, Fabio Di Troia, Mark Stamp | Published: 2021-10-03
Backdoor Attack
Adversarial Training
Deep Learning Methods

Back in Black: A Comparative Evaluation of Recent State-Of-The-Art Black-Box Attacks

Authors: Kaleel Mahmood, Rigel Mahmood, Ethan Rathbun, Marten van Dijk | Published: 2021-09-29
Backdoor Attack
Poisoning
Adversarial Attack

DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning

Authors: Md Tamjid Hossain, Shafkat Islam, Shahriar Badsha, Haoting Shen | Published: 2021-09-21
Backdoor Attack
Federated Learning
Defense Mechanisms

Excess Capacity and Backdoor Poisoning

Authors: Naren Sarayu Manoj, Avrim Blum | Published: 2021-09-02 | Updated: 2021-11-03
Data Poisoning Detection
Backdoor Attack
Adversarial Examples

Machine Unlearning of Features and Labels

Authors: Alexander Warnecke, Lukas Pirch, Christian Wressnegger, Konrad Rieck | Published: 2021-08-26 | Updated: 2023-08-07
Backdoor Attack
Poisoning
Machine Learning Methods

Advances in adversarial attacks and defenses in computer vision: A survey

Authors: Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah | Published: 2021-08-01 | Updated: 2021-09-02
Backdoor Attack
Robustness
Adversarial Examples

Can You Hear It? Backdoor Attacks via Ultrasonic Triggers

Authors: Stefanos Koffas, Jing Xu, Mauro Conti, Stjepan Picek | Published: 2021-07-30 | Updated: 2022-03-06
Backdoor Attack
Adversarial Attack
Security of Speech Recognition Systems

Accumulative Poisoning Attacks on Real-time Data

Authors: Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu | Published: 2021-06-18 | Updated: 2021-10-26
Online Learning
Backdoor Attack
Federated Learning