Security Attacks on LLM-based Code Completion Tools | Authors: Wen Cheng, Ke Sun, Xinyu Zhang, Wei Wang | Published: 2024-08-20 | Updated: 2025-01-02 | Tags: LLM Security, Prompt Injection, Attack Method
Transferring Backdoors between Large Language Models by Knowledge Distillation | Authors: Pengzhou Cheng, Zongru Wu, Tianjie Ju, Wei Du, Zhuosheng Zhang, Gongshen Liu | Published: 2024-08-19 | Tags: LLM Security, Backdoor Attack, Poisoning
Antidote: Post-fine-tuning Safety Alignment for Large Language Models against Harmful Fine-tuning | Authors: Tiansheng Huang, Gautam Bhattacharya, Pratik Joshi, Josh Kimball, Ling Liu | Published: 2024-08-18 | Updated: 2024-09-03 | Tags: LLM Security, Prompt Injection, Safety Alignment
BaThe: Defense against the Jailbreak Attack in Multimodal Large Language Models by Treating Harmful Instruction as Backdoor Trigger | Authors: Yulin Chen, Haoran Li, Yirui Zhang, Zihao Zheng, Yangqiu Song, Bryan Hooi | Published: 2024-08-17 | Updated: 2025-04-22 | Tags: AI Compliance, LLM Security, Content Moderation
MIA-Tuner: Adapting Large Language Models as Pre-training Text Detector | Authors: Wenjie Fu, Huandong Wang, Chen Gao, Guanghua Liu, Yong Li, Tao Jiang | Published: 2024-08-16 | Tags: LLM Security, Prompt Injection, Membership Inference
DePrompt: Desensitization and Evaluation of Personal Identifiable Information in Large Language Model Prompts | Authors: Xiongtao Sun, Gan Liu, Zhipeng He, Hui Li, Xiaoguang Li | Published: 2024-08-16 | Tags: LLM Security, Privacy Protection Method, Prompt Injection
Prefix Guidance: A Steering Wheel for Large Language Models to Defend Against Jailbreak Attacks | Authors: Jiawei Zhao, Kejiang Chen, Xiaojian Yuan, Weiming Zhang | Published: 2024-08-15 | Updated: 2024-08-22 | Tags: LLM Security, Prompt Injection, Defense Method
Casper: Prompt Sanitization for Protecting User Privacy in Web-Based Large Language Models | Authors: Chun Jie Chong, Chenxi Hou, Zhihao Yao, Seyed Mohammadjavad Seyed Talebi | Published: 2024-08-13 | Tags: LLM Security, Privacy Protection, Prompt Injection
Kov: Transferable and Naturalistic Black-Box LLM Attacks using Markov Decision Processes and Tree Search | Authors: Robert J. Moss | Published: 2024-08-11 | Tags: LLM Security, Prompt Injection, Compliance with Ethical Guidelines
Towards Automatic Hands-on-Keyboard Attack Detection Using LLMs in EDR Solutions | Authors: Amit Portnoy, Ehud Azikri, Shay Kels | Published: 2024-08-04 | Tags: LLM Security, Endpoint Detection, Data Collection