Poisoning

Adversarially robust generalization theory via Jacobian regularization for deep neural networks

Authors: Dongya Wu, Xin Li | Published: 2024-12-17
Poisoning
Adversarial Examples

GLL: A Differentiable Graph Learning Layer for Neural Networks

Authors: Jason Brown, Bohan Chen, Harris Hardiman-Mostow, Jeff Calder, Andrea L. Bertozzi | Published: 2024-12-11
Poisoning
Adversarial Training

Optimal Defenses Against Gradient Reconstruction Attacks

Authors: Yuxiao Chen, Gamze Gürsoy, Qi Lei | Published: 2024-11-06
Poisoning
Defense Methods

FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses

Authors: Isaac Baglin, Xiatian Zhu, Simon Hadfield | Published: 2024-11-05 | Updated: 2025-01-05
Poisoning
Attack Evaluation
Evaluation Methods

Federated Learning in Practice: Reflections and Projections

Authors: Katharine Daly, Hubert Eichner, Peter Kairouz, H. Brendan McMahan, Daniel Ramage, Zheng Xu | Published: 2024-10-11
Privacy Protection
Privacy Protection Methods
Poisoning

PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning

Authors: Tingchen Fu, Mrinank Sharma, Philip Torr, Shay B. Cohen, David Krueger, Fazl Barez | Published: 2024-10-11
LLM Performance Evaluation
Backdoor Attacks
Poisoning

Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning

Authors: Wassim Bouaziz, El-Mahdi El-Mhamdi, Nicolas Usunier | Published: 2024-10-09
Poisoning

CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models

Authors: Songning Lai, Jiayu Yang, Yu Huang, Lijie Hu, Tianlang Xue, Zhangyi Hu, Jiaxu Li, Haicheng Liao, Yutao Yue | Published: 2024-10-07
Backdoor Attacks
Poisoning

Federated Learning Nodes Can Reconstruct Peers’ Image Data

Authors: Ethan Wilson, Kai Yue, Chau-Wai Wong, Huaiyu Dai | Published: 2024-10-07
Privacy Protection
Poisoning

Empirical Perturbation Analysis of Linear System Solvers from a Data Poisoning Perspective

Authors: Yixin Liu, Arielle Carr, Lichao Sun | Published: 2024-10-01
Backdoor Attacks
Poisoning
Linear Solvers