Protecting Your LLMs with Information Bottleneck | Authors: Zichuan Liu, Zefan Wang, Linjie Xu, Jinyu Wang, Lei Song, Tianchun Wang, Chunlin Chen, Wei Cheng, Jiang Bian | Published: 2024-04-22 | Updated: 2024-10-10 | Tags: LLM Security, Prompt Injection, Compliance with Ethical Guidelines
Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs | Authors: Javier Rando, Francesco Croce, Kryštof Mitka, Stepan Shabalin, Maksym Andriushchenko, Nicolas Flammarion, Florian Tramèr | Published: 2024-04-22 | Updated: 2024-06-06 | Tags: LLM Security, Backdoor Attack, Prompt Injection
AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs | Authors: Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian | Published: 2024-04-21 | Tags: LLM Security, Prompt Injection, Prompt Engineering
CyberSecEval 2: A Wide-Ranging Cybersecurity Evaluation Suite for Large Language Models | Authors: Manish Bhatt, Sahana Chennabasappa, Yue Li, Cyrus Nikolaidis, Daniel Song, Shengye Wan, Faizan Ahmad, Cornelius Aschermann, Yaohui Chen, Dhaval Kapil, David Molnar, Spencer Whitman, Joshua Saxe | Published: 2024-04-19 | Tags: LLM Security, Cybersecurity, Prompt Injection
JailbreakLens: Visual Analysis of Jailbreak Attacks Against Large Language Models | Authors: Yingchaojie Feng, Zhizhang Chen, Zhining Kang, Sijia Wang, Minfeng Zhu, Wei Zhang, Wei Chen | Published: 2024-04-12 | Tags: LLM Performance Evaluation, Prompt Injection, Evaluation Method
Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward | Authors: Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma | Published: 2024-04-12 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection
Subtoxic Questions: Dive Into Attitude Change of LLM's Response in Jailbreak Attempts | Authors: Tianyu Zhang, Zixuan Zhao, Jiaqi Huang, Jingyu Hua, Sheng Zhong | Published: 2024-04-12 | Tags: LLM Security, Prompt Injection, Prompt Engineering
Sandwich attack: Multi-language Mixture Adaptive Attack on LLMs | Authors: Bibek Upadhayay, Vahid Behzadan | Published: 2024-04-09 | Tags: LLM Security, Prompt Injection, Attack Method
Rethinking How to Evaluate Language Model Jailbreak | Authors: Hongyu Cai, Arjun Arunasalam, Leo Y. Lin, Antonio Bianchi, Z. Berkay Celik | Published: 2024-04-09 | Updated: 2024-05-07 | Tags: Prompt Injection, Classification of Malicious Actors, Evaluation Method
Unbridled Icarus: A Survey of the Potential Perils of Image Inputs in Multimodal Large Language Model Security | Authors: Yihe Fan, Yuxin Cao, Ziyu Zhao, Ziyao Liu, Shaofeng Li | Published: 2024-04-08 | Updated: 2024-08-11 | Tags: LLM Security, Prompt Injection, Threat Modeling