LLM Security

Security Attacks on LLM-based Code Completion Tools

Authors: Wen Cheng, Ke Sun, Xinyu Zhang, Wei Wang | Published: 2024-08-20 | Updated: 2025-01-02
LLM Security
Prompt Injection
Attack Method

Transferring Backdoors between Large Language Models by Knowledge Distillation

Authors: Pengzhou Cheng, Zongru Wu, Tianjie Ju, Wei Du, Zhuosheng Zhang, Gongshen Liu | Published: 2024-08-19
LLM Security
Backdoor Attack
Poisoning

Antidote: Post-fine-tuning Safety Alignment for Large Language Models against Harmful Fine-tuning

Authors: Tiansheng Huang, Gautam Bhattacharya, Pratik Joshi, Josh Kimball, Ling Liu | Published: 2024-08-18 | Updated: 2024-09-03
LLM Security
Prompt Injection
Safety Alignment

BaThe: Defense against the Jailbreak Attack in Multimodal Large Language Models by Treating Harmful Instruction as Backdoor Trigger

Authors: Yulin Chen, Haoran Li, Yirui Zhang, Zihao Zheng, Yangqiu Song, Bryan Hooi | Published: 2024-08-17 | Updated: 2025-04-22
AI Compliance
LLM Security
Content Moderation

MIA-Tuner: Adapting Large Language Models as Pre-training Text Detector

Authors: Wenjie Fu, Huandong Wang, Chen Gao, Guanghua Liu, Yong Li, Tao Jiang | Published: 2024-08-16
LLM Security
Prompt Injection
Membership Inference

DePrompt: Desensitization and Evaluation of Personal Identifiable Information in Large Language Model Prompts

Authors: Xiongtao Sun, Gan Liu, Zhipeng He, Hui Li, Xiaoguang Li | Published: 2024-08-16
LLM Security
Privacy Protection Method
Prompt Injection

Prefix Guidance: A Steering Wheel for Large Language Models to Defend Against Jailbreak Attacks

Authors: Jiawei Zhao, Kejiang Chen, Xiaojian Yuan, Weiming Zhang | Published: 2024-08-15 | Updated: 2024-08-22
LLM Security
Prompt Injection
Defense Method

Casper: Prompt Sanitization for Protecting User Privacy in Web-Based Large Language Models

Authors: Chun Jie Chong, Chenxi Hou, Zhihao Yao, Seyed Mohammadjavad Seyed Talebi | Published: 2024-08-13
LLM Security
Privacy Protection
Prompt Injection

Kov: Transferable and Naturalistic Black-Box LLM Attacks using Markov Decision Processes and Tree Search

Authors: Robert J. Moss | Published: 2024-08-11
LLM Security
Prompt Injection
Compliance with Ethical Guidelines

Towards Automatic Hands-on-Keyboard Attack Detection Using LLMs in EDR Solutions

Authors: Amit Portnoy, Ehud Azikri, Shay Kels | Published: 2024-08-04
LLM Security
Endpoint Detection
Data Collection