A Cross-Language Investigation into Jailbreak Attacks in Large Language Models. Authors: Jie Li, Yi Liu, Chongyang Liu, Ling Shi, Xiaoning Ren, Yaowen Zheng, Yang Liu, Yinxing Xue | Published: 2024-01-30 | Tags: Character Role Acting, Prompt Injection, Multilingual LLM Jailbreak
Detection and Defense Against Prominent Attacks on Preconditioned LLM-Integrated Virtual Assistants. Authors: Chun Fai Chan, Daniel Wankit Yip, Aysan Esmradi | Published: 2024-01-02 | Tags: LLM Security, Character Role Acting, System Prompt Generation
Dr. Jekyll and Mr. Hyde: Two Faces of LLMs. Authors: Matteo Gioele Collu, Tom Janssen-Groesbeek, Stefanos Koffas, Mauro Conti, Stjepan Picek | Published: 2023-12-06 | Updated: 2024-10-07 | Tags: Character Role Acting, Prompt Injection, Poisoning
“Do Anything Now”: Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. Authors: Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, Yang Zhang | Published: 2023-08-07 | Updated: 2024-05-15 | Tags: LLM Security, Character Role Acting, Prompt Injection