LLM Security

Attacks on Third-Party APIs of Large Language Models

Authors: Wanru Zhao, Vidit Khazanchi, Haodi Xing, Xuanli He, Qiongkai Xu, Nicholas Donald Lane | Published: 2024-04-24
Tags: LLM Security, Prompt Injection, Attack Method

Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models

Authors: Jiaming He, Wenbo Jiang, Guanyu Hou, Wenshu Fan, Rui Zhang, Hongwei Li | Published: 2024-04-23 | Updated: 2025-01-08
Tags: LLM Security, Backdoor Attack, Poisoning

Protecting Your LLMs with Information Bottleneck

Authors: Zichuan Liu, Zefan Wang, Linjie Xu, Jinyu Wang, Lei Song, Tianchun Wang, Chunlin Chen, Wei Cheng, Jiang Bian | Published: 2024-04-22 | Updated: 2024-10-10
Tags: LLM Security, Prompt Injection, Compliance with Ethical Guidelines

Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs

Authors: Javier Rando, Francesco Croce, Kryštof Mitka, Stepan Shabalin, Maksym Andriushchenko, Nicolas Flammarion, Florian Tramèr | Published: 2024-04-22 | Updated: 2024-06-06
Tags: LLM Security, Backdoor Attack, Prompt Injection

AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs

Authors: Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian | Published: 2024-04-21
Tags: LLM Security, Prompt Injection, Prompt Engineering

CyberSecEval 2: A Wide-Ranging Cybersecurity Evaluation Suite for Large Language Models

Authors: Manish Bhatt, Sahana Chennabasappa, Yue Li, Cyrus Nikolaidis, Daniel Song, Shengye Wan, Faizan Ahmad, Cornelius Aschermann, Yaohui Chen, Dhaval Kapil, David Molnar, Spencer Whitman, Joshua Saxe | Published: 2024-04-19
Tags: LLM Security, Cybersecurity, Prompt Injection

LLMs for Cyber Security: New Opportunities

Authors: Dinil Mon Divakaran, Sai Teja Peddinti | Published: 2024-04-17
Tags: LLM Security, Cybersecurity

Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward

Authors: Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma | Published: 2024-04-12
Tags: LLM Security, LLM Performance Evaluation, Prompt Injection

Subtoxic Questions: Dive Into Attitude Change of LLM’s Response in Jailbreak Attempts

Authors: Tianyu Zhang, Zixuan Zhao, Jiaqi Huang, Jingyu Hua, Sheng Zhong | Published: 2024-04-12
Tags: LLM Security, Prompt Injection, Prompt Engineering

Sandwich attack: Multi-language Mixture Adaptive Attack on LLMs

Authors: Bibek Upadhayay, Vahid Behzadan | Published: 2024-04-09
Tags: LLM Security, Prompt Injection, Attack Method