脱獄攻撃手法

SafePTR: Token-Level Jailbreak Defense in Multimodal LLMs via Prune-then-Restore Mechanism

Authors: Beitao Chen, Xinyu Lyu, Lianli Gao, Jingkuan Song, Heng Tao Shen | Published: 2025-07-02
Prompt Injection
脱獄攻撃手法
Transparency and Verification

MetaCipher: A Time-Persistent and Universal Multi-Agent Framework for Cipher-Based Jailbreak Attacks for LLMs

Authors: Boyuan Chen, Minghao Shao, Abdul Basit, Siddharth Garg, Muhammad Shafique | Published: 2025-06-27 | Updated: 2025-08-13
Framework
Large Language Model
脱獄攻撃手法

SoK: Evaluating Jailbreak Guardrails for Large Language Models

Authors: Xunguang Wang, Zhenlan Ji, Wenxuan Wang, Zongjie Li, Daoyuan Wu, Shuai Wang | Published: 2025-06-12
Prompt Injection
Trade-Off Between Safety And Usability
脱獄攻撃手法