Adversarial Examples

Select Me! When You Need a Tool: A Black-box Text Attack on Tool Selection

Authors: Liuji Chen, Hao Gao, Jinghao Zhang, Qiang Liu, Shu Wu, Liang Wang | Published: 2025-04-07
Prompt Leaking
Information Security
Adversarial Examples

Adv-CPG: A Customized Portrait Generation Framework with Facial Adversarial Attacks

Authors: Junying Wang, Hongyuan Zhang, Yuan Yuan | Published: 2025-03-11
Privacy Protection
Adversarial Examples
Face Recognition Systems

Adversarial Robustness in Two-Stage Learning-to-Defer: Algorithms and Guarantees

Authors: Yannis Montreuil, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi | Published: 2025-02-03
Learning-to-Defer
Adversarial Examples
Adversarial Training

Differentiable Adversarial Attacks for Marked Temporal Point Processes

Authors: Pritish Chakraborty, Vinayak Gupta, Rahul R, Srikanta J. Bedathur, Abir De | Published: 2025-01-17
Adversarial Examples
Optimization Problems

CaFA: Cost-aware, Feasible Attacks With Database Constraints Against Neural Tabular Classifiers

Authors: Matan Ben-Tov, Daniel Deutch, Nave Frost, Mahmood Sharif | Published: 2025-01-17
Data Integrity Constraints
Experimental Validation
Adversarial Examples

Image-based Multimodal Models as Intruders: Transferable Multimodal Attacks on Video-based MLLMs

Authors: Linhao Huang, Xue Jiang, Zhiqiang Wang, Wentao Mo, Xi Xiao, Bo Han, Yongjie Yin, Feng Zheng | Published: 2025-01-02 | Updated: 2025-01-10
Attack Evaluation
Attack Methods
Adversarial Examples

Standard-Deviation-Inspired Regularization for Improving Adversarial Robustness

Authors: Olukorede Fakorede, Modeste Atsague, Jin Tian | Published: 2024-12-27
Adversarial Examples
Adversarial Training

Adversarially robust generalization theory via Jacobian regularization for deep neural networks

Authors: Dongya Wu, Xin Li | Published: 2024-12-17
Poisoning
Adversarial Examples

CausAdv: A Causal-based Framework for Detecting Adversarial Examples

Authors: Hichem Debbi | Published: 2024-10-29
Framework
Adversarial Examples

Integrating uncertainty quantification into randomized smoothing based robustness guarantees

Authors: Sina Däubener, Kira Maag, David Krueger, Asja Fischer | Published: 2024-10-27
Adversarial Examples
Equivalence Evaluation