Time Traveling to Defend Against Adversarial Example Attacks in Image Classification | Authors: Anthony Etim, Jakub Szefer | Published: 2024-10-10 | Tags: Attack Method, Adversarial Example, Defense Method
Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems | Authors: Donghyun Lee, Mo Tiwari | Published: 2024-10-09 | Tags: Prompt Injection, Attack Method, Defense Method
SecAlign: Defending Against Prompt Injection with Preference Optimization | Authors: Sizhe Chen, Arman Zharmagambetov, Saeed Mahloujifar, Kamalika Chaudhuri, David Wagner, Chuan Guo | Published: 2024-10-07 | Updated: 2025-01-13 | Tags: LLM Security, Prompt Injection, Defense Method
SoK: Towards Security and Safety of Edge AI | Authors: Tatjana Wingarz, Anne Lauscher, Janick Edinger, Dominik Kaaser, Stefan Schulte, Mathias Fischer | Published: 2024-10-07 | Tags: Bias, Privacy Protection, Defense Method
Robustness Reprogramming for Representation Learning | Authors: Zhichao Hou, MohamadAli Torkamani, Hamid Krim, Xiaorui Liu | Published: 2024-10-06 | Tags: Attack Evaluation, Defense Method
Enhancing Robustness of Graph Neural Networks through p-Laplacian | Authors: Anuj Kumar Sirohi, Subhanu Halder, Kabir Kumar, Sandeep Kumar | Published: 2024-09-27 | Tags: Optimization Problem, Defense Method
Obliviate: Neutralizing Task-agnostic Backdoors within the Parameter-efficient Fine-tuning Paradigm | Authors: Jaehan Kim, Minkyoo Song, Seung Ho Na, Seungwon Shin | Published: 2024-09-21 | Updated: 2024-10-06 | Tags: Backdoor Attack, Model Performance Evaluation, Defense Method
Defending against Model Inversion Attacks via Random Erasing | Authors: Viet-Hung Tran, Ngoc-Bao Nguyen, Son T. Mai, Hans Vandierendonck, Ngai-man Cheung | Published: 2024-09-02 | Tags: Watermarking, Privacy Protection Method, Defense Method
EEG-Defender: Defending against Jailbreak through Early Exit Generation of Large Language Models | Authors: Chongwen Zhao, Zhihao Dou, Kaizhu Huang | Published: 2024-08-21 | Tags: LLM Security, Prompt Injection, Defense Method
Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks | Authors: Hetvi Waghela, Jaydip Sen, Sneha Rakshit | Published: 2024-08-20 | Tags: Poisoning, Adversarial Example, Defense Method