An Adversarial Perspective on Machine Unlearning for AI Safety | Authors: Jakub Łucki, Boyi Wei, Yangsibo Huang, Peter Henderson, Florian Tramèr, Javier Rando | Published: 2024-09-26 | Updated: 2025-04-10 | Tags: Prompt Injection, Safety Alignment, Machine Unlearning
Weak-to-Strong Backdoor Attack for Large Language Models | Authors: Shuai Zhao, Leilei Gan, Zhongliang Guo, Xiaobao Wu, Luwei Xiao, Xiaoyu Xu, Cong-Duy Nguyen, Luu Anh Tuan | Published: 2024-09-26 | Updated: 2024-10-13 | Tags: Backdoor Attack, Prompt Injection
MoJE: Mixture of Jailbreak Experts, Naive Tabular Classifiers as Guard for Prompt Attacks | Authors: Giandomenico Cornacchia, Giulio Zizzo, Kieran Fraser, Muhammad Zaid Hameed, Ambrish Rawat, Mark Purcell | Published: 2024-09-26 | Updated: 2024-10-04 | Tags: Guardrail Method, Content Moderation, Prompt Injection
PathSeeker: Exploring LLM Security Vulnerabilities with a Reinforcement Learning-Based Jailbreak Approach | Authors: Zhihao Lin, Wei Ma, Mingyi Zhou, Yanjie Zhao, Haoyu Wang, Yang Liu, Jun Wang, Li Li | Published: 2024-09-21 | Updated: 2024-10-03 | Tags: LLM Performance Evaluation, Prompt Injection
LLM Honeypot: Leveraging Large Language Models as Advanced Interactive Honeypot Systems | Authors: Hakan T. Otal, M. Abdullah Canbaz | Published: 2024-09-12 | Updated: 2024-09-15 | Tags: LLM Security, Cybersecurity, Prompt Injection
Exploring LLMs for Malware Detection: Review, Framework Design, and Countermeasure Approaches | Authors: Jamal Al-Karaki, Muhammad Al-Zafar Khan, Marwan Omar | Published: 2024-09-11 | Tags: LLM Security, Prompt Injection, Malware Classification
CLNX: Bridging Code and Natural Language for C/C++ Vulnerability-Contributing Commits Identification | Authors: Zeqing Qin, Yiwei Wu, Lansheng Han | Published: 2024-09-11 | Tags: LLM Performance Evaluation, Program Analysis, Prompt Injection
DrLLM: Prompt-Enhanced Distributed Denial-of-Service Resistance Method with Large Language Models | Authors: Zhenyu Yin, Shang Liu, Guangyuan Xu | Published: 2024-09-11 | Updated: 2025-01-13 | Tags: DDoS Attack Detection, LLM Performance Evaluation, Prompt Injection
AdaPPA: Adaptive Position Pre-Fill Jailbreak Attack Approach Targeting LLMs | Authors: Lijia Lv, Weigang Zhang, Xuehai Tang, Jie Wen, Feng Liu, Jizhong Han, Songlin Hu | Published: 2024-09-11 | Tags: LLM Security, Prompt Injection, Attack Method
Exploring User Privacy Awareness on GitHub: An Empirical Study | Authors: Costanza Alfieri, Juri Di Rocco, Paola Inverardi, Phuong T. Nguyen | Published: 2024-09-06 | Updated: 2024-09-10 | Tags: Privacy Protection, Prompt Injection, User Activity Analysis