Prompt Injection

EEG-Defender: Defending against Jailbreak through Early Exit Generation of Large Language Models

Authors: Chongwen Zhao, Zhihao Dou, Kaizhu Huang | Published: 2024-08-21
Tags: LLM Security, Prompt Injection, Defense Method

Hide Your Malicious Goal Into Benign Narratives: Jailbreak Large Language Models through Carrier Articles

Authors: Zhilong Wang, Haizhou Wang, Nanqing Luo, Lan Zhang, Xiaoyan Sun, Yebo Cao, Peng Liu | Published: 2024-08-20 | Updated: 2025-02-07
Tags: Prompt Injection, Large Language Model, Attack Scenario Analysis

Security Attacks on LLM-based Code Completion Tools

Authors: Wen Cheng, Ke Sun, Xinyu Zhang, Wei Wang | Published: 2024-08-20 | Updated: 2025-01-02
Tags: LLM Security, Prompt Injection, Attack Method

LeCov: Multi-level Testing Criteria for Large Language Models

Authors: Xuan Xie, Jiayang Song, Yuheng Huang, Da Song, Fuyuan Zhang, Felix Juefei-Xu, Lei Ma | Published: 2024-08-20
Tags: LLM Performance Evaluation, Test Prioritization, Prompt Injection

Antidote: Post-fine-tuning Safety Alignment for Large Language Models against Harmful Fine-tuning

Authors: Tiansheng Huang, Gautam Bhattacharya, Pratik Joshi, Josh Kimball, Ling Liu | Published: 2024-08-18 | Updated: 2024-09-03
Tags: LLM Security, Prompt Injection, Safety Alignment

MIA-Tuner: Adapting Large Language Models as Pre-training Text Detector

Authors: Wenjie Fu, Huandong Wang, Chen Gao, Guanghua Liu, Yong Li, Tao Jiang | Published: 2024-08-16
Tags: LLM Security, Prompt Injection, Membership Inference

PatUntrack: Automated Generating Patch Examples for Issue Reports without Tracked Insecure Code

Authors: Ziyou Jiang, Lin Shi, Guowei Yang, Qing Wang | Published: 2024-08-16
Tags: Code Generation, Prompt Injection, Vulnerability Management

DePrompt: Desensitization and Evaluation of Personal Identifiable Information in Large Language Model Prompts

Authors: Xiongtao Sun, Gan Liu, Zhipeng He, Hui Li, Xiaoguang Li | Published: 2024-08-16
Tags: LLM Security, Privacy Protection Method, Prompt Injection

Prefix Guidance: A Steering Wheel for Large Language Models to Defend Against Jailbreak Attacks

Authors: Jiawei Zhao, Kejiang Chen, Xiaojian Yuan, Weiming Zhang | Published: 2024-08-15 | Updated: 2024-08-22
Tags: LLM Security, Prompt Injection, Defense Method

LLM-Enhanced Static Analysis for Precise Identification of Vulnerable OSS Versions

Authors: Yiran Cheng, Lwin Khin Shar, Ting Zhang, Shouguo Yang, Chaopeng Dong, David Lo, Shichao Lv, Zhiqiang Shi, Limin Sun | Published: 2024-08-14
Tags: Code Change Analysis, Prompt Injection, Vulnerability Management