Content Focused on Poisoning Attacks

Spider-Sense: Intrinsic Risk Sensing for Efficient Agent Defense with Hierarchical Adaptive Screening

Authors: Zhenxiong Yu, Zhi Yang, Zhiheng Jin, Shuhe Wang, Heng Zhang, Yanlin Fei, Lingfeng Zeng, Fangqi Lou, Shuo Zhang, Tu Hu, Jingping Liu, Rongze Chen, Xingyu Zhu, Kunyi Wang, Chaofa Yuan, Xin Guo, Zhaowei Liu, Feipeng Zhang, Jie Huang, Huacan Wang, Ronghao Chen, Liwen Zhang | Published: 2026-02-05
Security Metrics
Explanation of Attack Methods
Content Focused on Poisoning Attacks

De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks

Authors: Jian Chen, Xuxin Zhang, Rui Zhang, Chen Wang, Ling Liu | Published: 2021-05-08
Poisoning
Content Focused on Poisoning Attacks
Challenges of Generative Models

Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers

Authors: Tzvika Shapira, David Berend, Ishai Rosenberg, Yang Liu, Asaf Shabtai, Yuval Elovici | Published: 2020-10-30
Backdoor Attack
Malware Detection
Content Focused on Poisoning Attacks

Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks

Authors: Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan, Andreas Spanias | Published: 2020-09-30
GNN
Poisoning
Content Focused on Poisoning Attacks

A black-box adversarial attack for poisoning clustering

Authors: Antonio Emanuele Cinà, Alessandro Torcinovich, Marcello Pelillo | Published: 2020-09-09 | Updated: 2021-11-10
Backdoor Attack
Poisoning
Content Focused on Poisoning Attacks

Defending Regression Learners Against Poisoning Attacks

Authors: Sandamal Weerasinghe, Sarah M. Erfani, Tansu Alpcan, Christopher Leckie, Justin Kopacz | Published: 2020-08-21
Backdoor Attack
Poisoning
Content Focused on Poisoning Attacks

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

Authors: Xiang Zhang, Marinka Zitnik | Published: 2020-06-15 | Updated: 2020-10-28
Graph Neural Network
Adversarial Attack
Content Focused on Poisoning Attacks

Dynamic Backdoor Attacks Against Machine Learning Models

Authors: Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang | Published: 2020-03-07 | Updated: 2022-03-03
Poisoning
Content Focused on Poisoning Attacks
Defense Methods

Can’t Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks

Authors: Moshe Kravchik, Asaf Shabtai | Published: 2020-02-07
Poisoning
Robustness Improvement Methods
Content Focused on Poisoning Attacks

Regularization Helps with Mitigating Poisoning Attacks: Distributionally-Robust Machine Learning Using the Wasserstein Distance

Authors: Farhad Farokhi | Published: 2020-01-29
Robustness Improvement Methods
Content Focused on Poisoning Attacks
Continuous Linear Functions