Poisoning

Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency

Authors: Soumyadeep Pal, Yuguang Yao, Ren Wang, Bingquan Shen, Sijia Liu | Published: 2024-03-15
Watermarking
Backdoor Attack
Poisoning

Visual Privacy Auditing with Diffusion Models

Authors: Kristian Schwethelm, Johannes Kaiser, Moritz Knolle, Daniel Rueckert, Georgios Kaissis, Alexander Ziller | Published: 2024-03-12
Watermarking
Poisoning
Reconstruction Durability

Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code

Authors: Cristina Improta | Published: 2024-03-11
Security Analysis
Backdoor Attack
Poisoning

Provable Mutual Benefits from Federated Learning in Privacy-Sensitive Domains

Authors: Nikita Tsoy, Anna Mihalkova, Teodora Todorova, Nikola Konstantinov | Published: 2024-03-11 | Updated: 2024-11-07
Poisoning
Optimization Problem
Federated Learning

Fake or Compromised? Making Sense of Malicious Clients in Federated Learning

Authors: Hamid Mozaffari, Sunav Choudhary, Amir Houmansadr | Published: 2024-03-10
Backdoor Attack
Poisoning
Malicious Client

Enhancing Security in Federated Learning through Adaptive Consensus-Based Model Update Validation

Authors: Zahir Alsulaimawi | Published: 2024-03-05
Poisoning
Federated Learning
Defense Method

Mitigating Label Flipping Attacks in Malicious URL Detectors Using Ensemble Trees

Authors: Ehsan Nowroozi, Nada Jadalla, Samaneh Ghelichkhani, Alireza Jolfaei | Published: 2024-03-05
Backdoor Attack
Poisoning
Defense Method

Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks

Authors: Ehsan Nowroozi, Imran Haider, Rahim Taheri, Mauro Conti | Published: 2024-03-05
Backdoor Attack
Poisoning
Federated Learning

Enhancing Data Provenance and Model Transparency in Federated Learning Systems — A Database Approach

Authors: Michael Gu, Ramasoumya Naraparaju, Dongfang Zhao | Published: 2024-03-03
Data Provenance
Poisoning
Federated Learning

Analysis of Privacy Leakage in Federated Large Language Models

Authors: Minh N. Vu, Truc Nguyen, Tre' R. Jeter, My T. Thai | Published: 2024-03-02
Privacy Protection Method
Poisoning
Federated Learning