Prompt Injection

LLM Security Guard for Code

Authors: Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, Mohammad Ghafari | Published: 2024-05-02 | Updated: 2024-05-03
Tags: LLM Security, Security Analysis, Prompt Injection

Unleashing the Power of LLM to Infer State Machine from the Protocol Implementation

Authors: Haiyang Wei, Ligeng Chen, Zhengjie Du, Yuhan Wu, Haohui Huang, Yue Liu, Guang Cheng, Fengyuan Xu, Linzhang Wang, Bing Mao | Published: 2024-05-01 | Updated: 2025-03-27
Tags: LLM Performance Evaluation, Prompt Injection, State Transition Model

TuBA: Cross-Lingual Transferability of Backdoor Attacks in LLMs with Instruction Tuning

Authors: Xuanli He, Jun Wang, Qiongkai Xu, Pasquale Minervini, Pontus Stenetorp, Benjamin I. P. Rubinstein, Trevor Cohn | Published: 2024-04-30 | Updated: 2025-03-17
Tags: Content Moderation, Backdoor Attack, Prompt Injection

Evaluating and Mitigating Linguistic Discrimination in Large Language Models

Authors: Guoliang Dong, Haoyu Wang, Jun Sun, Xinyu Wang | Published: 2024-04-29 | Updated: 2024-05-10
Tags: LLM Performance Evaluation, Bias, Prompt Injection

Attacks on Third-Party APIs of Large Language Models

Authors: Wanru Zhao, Vidit Khazanchi, Haodi Xing, Xuanli He, Qiongkai Xu, Nicholas Donald Lane | Published: 2024-04-24
Tags: LLM Security, Prompt Injection, Attack Method

Act as a Honeytoken Generator! An Investigation into Honeytoken Generation with Large Language Models

Authors: Daniel Reti, Norman Becker, Tillmann Angeli, Anasuya Chattopadhyay, Daniel Schneider, Sebastian Vollmer, Hans D. Schotten | Published: 2024-04-24
Tags: LLM Performance Evaluation, Honeypot Technology, Prompt Injection

zkLLM: Zero Knowledge Proofs for Large Language Models

Authors: Haochen Sun, Jason Li, Hongyang Zhang | Published: 2024-04-24
Tags: Prompt Injection, Computational Efficiency, Watermark Robustness

Protecting Your LLMs with Information Bottleneck

Authors: Zichuan Liu, Zefan Wang, Linjie Xu, Jinyu Wang, Lei Song, Tianchun Wang, Chunlin Chen, Wei Cheng, Jiang Bian | Published: 2024-04-22 | Updated: 2024-10-10
Tags: LLM Security, Prompt Injection, Compliance with Ethical Guidelines

Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs

Authors: Javier Rando, Francesco Croce, Kryštof Mitka, Stepan Shabalin, Maksym Andriushchenko, Nicolas Flammarion, Florian Tramèr | Published: 2024-04-22 | Updated: 2024-06-06
Tags: LLM Security, Backdoor Attack, Prompt Injection

AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs

Authors: Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian | Published: 2024-04-21
Tags: LLM Security, Prompt Injection, Prompt Engineering