A Cross-Language Investigation into Jailbreak Attacks in Large Language Models | Authors: Jie Li, Yi Liu, Chongyang Liu, Ling Shi, Xiaoning Ren, Yaowen Zheng, Yang Liu, Yinxing Xue | Published: 2024-01-30 | Tags: Character Role Acting, Prompt Injection, Multilingual LLM Jailbreak
Bergeron: Combating Adversarial Attacks through a Conscience-Based Alignment Framework | Authors: Matthew Pisano, Peter Ly, Abraham Sanders, Bingsheng Yao, Dakuo Wang, Tomek Strzalkowski, Mei Si | Published: 2023-11-16 | Updated: 2024-08-18 | Tags: Prompt Injection, Multilingual LLM Jailbreak, Adversarial Attack