GuidedBench: Measuring and Mitigating the Evaluation Discrepancies of In-the-wild LLM Jailbreak Methods | Authors: Ruixuan Huang, Xunguang Wang, Zongjie Li, Daoyuan Wu, Shuai Wang | Published: 2025-02-24 | Updated: 2025-07-09 | Tags: Prompt Injection, Jailbreak Method, Evaluation Method
TombRaider: Entering the Vault of History to Jailbreak Large Language Models | Authors: Junchen Ding, Jiahao Zhang, Yi Liu, Ziqi Ding, Gelei Deng, Yuekang Li | Published: 2025-01-27 | Updated: 2025-08-25 | Tags: Prompt Injection, Prompt Leaking, Jailbreak Method