SoK: Evaluating Jailbreak Guardrails for Large Language Models | Authors: Xunguang Wang, Zhenlan Ji, Wenxuan Wang, Zongjie Li, Daoyuan Wu, Shuai Wang | Published: 2025-06-12 | Tags: Prompt Injection, Trade-Off Between Safety And Usability, Jailbreak Attack Methods
SAGE: A Generic Framework for LLM Safety Evaluation | Authors: Madhur Jindal, Hari Shrawgi, Parag Agrawal, Sandipan Dandapat | Published: 2025-04-28 | Tags: User Identification System, Large Language Model, Trade-Off Between Safety And Usability
Improving LLM Safety Alignment with Dual-Objective Optimization | Authors: Xuandong Zhao, Will Cai, Tianneng Shi, David Huang, Licong Lin, Song Mei, Dawn Song | Published: 2025-03-05 | Updated: 2025-06-12 | Tags: Prompt Injection, Robustness Improvement Method, Trade-Off Between Safety And Usability