Optimizing Adaptive Attacks against Content Watermarks for Language Models
Authors: Abdulrahman Diaa, Toluwani Aremu, Nils Lukas | Published: 2024-10-03
Tags: LLM Security, Watermarking, Prompt Injection
Robust LLM Safeguarding via Refusal Feature Adversarial Training
Authors: Lei Yu, Virginie Do, Karen Hambardzumyan, Nicola Cancedda | Published: 2024-09-30 | Updated: 2025-03-20
Tags: Prompt Injection, Model Robustness, Adversarial Learning
System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective
Authors: Fangzhou Wu, Ethan Cecchetti, Chaowei Xiao | Published: 2024-09-27 | Updated: 2024-10-10
Tags: LLM Security, Prompt Injection, Execution Trace Interference
An Adversarial Perspective on Machine Unlearning for AI Safety
Authors: Jakub Łucki, Boyi Wei, Yangsibo Huang, Peter Henderson, Florian Tramèr, Javier Rando | Published: 2024-09-26 | Updated: 2025-04-10
Tags: Prompt Injection, Safety Alignment, Machine Unlearning
Weak-to-Strong Backdoor Attack for Large Language Models
Authors: Shuai Zhao, Leilei Gan, Zhongliang Guo, Xiaobao Wu, Luwei Xiao, Xiaoyu Xu, Cong-Duy Nguyen, Luu Anh Tuan | Published: 2024-09-26 | Updated: 2024-10-13
Tags: Backdoor Attack, Prompt Injection
MoJE: Mixture of Jailbreak Experts, Naive Tabular Classifiers as Guard for Prompt Attacks
Authors: Giandomenico Cornacchia, Giulio Zizzo, Kieran Fraser, Muhammad Zaid Hameed, Ambrish Rawat, Mark Purcell | Published: 2024-09-26 | Updated: 2024-10-04
Tags: Guardrail Method, Content Moderation, Prompt Injection
PathSeeker: Exploring LLM Security Vulnerabilities with a Reinforcement Learning-Based Jailbreak Approach
Authors: Zhihao Lin, Wei Ma, Mingyi Zhou, Yanjie Zhao, Haoyu Wang, Yang Liu, Jun Wang, Li Li | Published: 2024-09-21 | Updated: 2024-10-03
Tags: LLM Performance Evaluation, Prompt Injection
LLM Honeypot: Leveraging Large Language Models as Advanced Interactive Honeypot Systems
Authors: Hakan T. Otal, M. Abdullah Canbaz | Published: 2024-09-12 | Updated: 2024-09-15
Tags: LLM Security, Cybersecurity, Prompt Injection
Exploring LLMs for Malware Detection: Review, Framework Design, and Countermeasure Approaches
Authors: Jamal Al-Karaki, Muhammad Al-Zafar Khan, Marwan Omar | Published: 2024-09-11
Tags: LLM Security, Prompt Injection, Malware Classification
CLNX: Bridging Code and Natural Language for C/C++ Vulnerability-Contributing Commits Identification
Authors: Zeqing Qin, Yiwei Wu, Lansheng Han | Published: 2024-09-11
Tags: LLM Performance Evaluation, Program Analysis, Prompt Injection