Poisoning

Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency

Authors: Soumyadeep Pal, Yuguang Yao, Ren Wang, Bingquan Shen, Sijia Liu | Published: 2024-03-15
Watermarking
Backdoor Attack
Poisoning

Visual Privacy Auditing with Diffusion Models

Authors: Kristian Schwethelm, Johannes Kaiser, Moritz Knolle, Daniel Rueckert, Georgios Kaissis, Alexander Ziller | Published: 2024-03-12
Watermarking
Poisoning
Reconstruction Resistance

Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code

Authors: Cristina Improta | Published: 2024-03-11
Security Analysis
Backdoor Attack
Poisoning

Provable Mutual Benefits from Federated Learning in Privacy-Sensitive Domains

Authors: Nikita Tsoy, Anna Mihalkova, Teodora Todorova, Nikola Konstantinov | Published: 2024-03-11 | Updated: 2024-11-07
Poisoning
Optimization Problem
Federated Learning

Fake or Compromised? Making Sense of Malicious Clients in Federated Learning

Authors: Hamid Mozaffari, Sunav Choudhary, Amir Houmansadr | Published: 2024-03-10
Backdoor Attack
Poisoning
Malicious Clients

Enhancing Security in Federated Learning through Adaptive Consensus-Based Model Update Validation

Authors: Zahir Alsulaimawi | Published: 2024-03-05
Poisoning
Federated Learning
Defense Method

Mitigating Label Flipping Attacks in Malicious URL Detectors Using Ensemble Trees

Authors: Ehsan Nowroozi, Nada Jadalla, Samaneh Ghelichkhani, Alireza Jolfaei | Published: 2024-03-05
Backdoor Attack
Poisoning
Defense Method

Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks

Authors: Ehsan Nowroozi, Imran Haider, Rahim Taheri, Mauro Conti | Published: 2024-03-05
Backdoor Attack
Poisoning
Federated Learning

Enhancing Data Provenance and Model Transparency in Federated Learning Systems — A Database Approach

Authors: Michael Gu, Ramasoumya Naraparaju, Dongfang Zhao | Published: 2024-03-03
Data Provenance and Lineage
Poisoning
Federated Learning

Analysis of Privacy Leakage in Federated Large Language Models

Authors: Minh N. Vu, Truc Nguyen, Tre' R. Jeter, My T. Thai | Published: 2024-03-02
Privacy Protection Method
Poisoning
Federated Learning