Multilingual LLM Jailbreak

A Cross-Language Investigation into Jailbreak Attacks in Large Language Models

Authors: Jie Li, Yi Liu, Chongyang Liu, Ling Shi, Xiaoning Ren, Yaowen Zheng, Yang Liu, Yinxing Xue | Published: 2024-01-30
Tags: Character Role Acting | Prompt Injection | Multilingual LLM Jailbreak

Bergeron: Combating Adversarial Attacks through a Conscience-Based Alignment Framework

Authors: Matthew Pisano, Peter Ly, Abraham Sanders, Bingsheng Yao, Dakuo Wang, Tomek Strzalkowski, Mei Si | Published: 2023-11-16 | Updated: 2024-08-18
Tags: Prompt Injection | Multilingual LLM Jailbreak | Adversarial Attack