Transferring Robustness for Graph Neural Network Against Poisoning Attacks | Authors: Xianfeng Tang, Yandong Li, Yiwei Sun, Huaxiu Yao, Prasenjit Mitra, Suhang Wang | Published: 2019-08-20 | Updated: 2020-02-26 | Tags: Poisoning, Robustness Enhancement Methods, Content Specific to Poisoning Attacks
Model Agnostic Defence against Backdoor Attacks in Machine Learning | Authors: Sakshi Udeshi, Shanshan Peng, Gerald Woo, Lionell Loh, Louth Rawshan, Sudipta Chattopadhyay | Published: 2019-08-06 | Updated: 2022-03-31 | Tags: Backdoor Attack, Attack Evaluation, Content Specific to Poisoning Attacks
Is feature selection secure against training data poisoning? | Authors: Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli | Published: 2018-04-21 | Tags: Poisoning, Poisoned Data Detection, Content Specific to Poisoning Attacks