JailbreakLens: Interpreting Jailbreak Mechanism in the Lens of Representation and Circuit | Authors: Zeqing He, Zhibo Wang, Zhixuan Chu, Huiyu Xu, Wenhui Zhang, Qinglong Wang, Rui Zheng | Published: 2024-11-17 | Updated: 2025-04-24 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Large Language Model
Attention Tracker: Detecting Prompt Injection Attacks in LLMs | Authors: Kuo-Han Hung, Ching-Yun Ko, Ambrish Rawat, I-Hsin Chung, Winston H. Hsu, Pin-Yu Chen | Published: 2024-11-01 | Updated: 2025-04-23 | Tags: Indirect Prompt Injection, Large Language Model, Attention Mechanism
Code Vulnerability Repair with Large Language Model using Context-Aware Prompt Tuning | Authors: Arshiya Khan, Guannan Liu, Xing Gao | Published: 2024-09-27 | Updated: 2025-06-11 | Tags: Code Vulnerability Repair, Security Context Integration, Large Language Model
Hide Your Malicious Goal Into Benign Narratives: Jailbreak Large Language Models through Carrier Articles | Authors: Zhilong Wang, Haizhou Wang, Nanqing Luo, Lan Zhang, Xiaoyan Sun, Yebo Cao, Peng Liu | Published: 2024-08-20 | Updated: 2025-02-07 | Tags: Prompt Injection, Large Language Model, Attack Scenario Analysis
From Theft to Bomb-Making: The Ripple Effect of Unlearning in Defending Against Jailbreak Attacks | Authors: Zhexin Zhang, Junxiao Yang, Yida Lu, Pei Ke, Shiyao Cui, Chujie Zheng, Hongning Wang, Minlie Huang | Published: 2024-07-03 | Updated: 2025-05-20 | Tags: Prompt Injection, Large Language Model, Law Enforcement Evasion
Knowledge-to-Jailbreak: Investigating Knowledge-driven Jailbreaking Attacks for Large Language Models | Authors: Shangqing Tu, Zhuoran Pan, Wenxuan Wang, Zhexin Zhang, Yuliang Sun, Jifan Yu, Hongning Wang, Lei Hou, Juanzi Li | Published: 2024-06-17 | Updated: 2025-06-09 | Tags: Cooperative Effects with LLM, Prompt Injection, Large Language Model
Cross-Modal Safety Alignment: Is textual unlearning all you need? | Authors: Trishna Chakraborty, Erfan Shayegani, Zikui Cai, Nael Abu-Ghazaleh, M. Salman Asif, Yue Dong, Amit K. Roy-Chowdhury, Chengyu Song | Published: 2024-05-27 | Updated: 2025-10-14 | Tags: Privacy Enhancing Technology, Calculation of Output Harmfulness, Large Language Model
S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models | Authors: Xiaohan Yuan, Jinfeng Li, Dongxia Wang, Yuefeng Chen, Xiaofeng Mao, Longtao Huang, Jialuo Chen, Hui Xue, Xiaoxia Liu, Wenhai Wang, Kui Ren, Jingyi Wang | Published: 2024-05-23 | Updated: 2025-04-07 | Tags: Risk Analysis Method, Large Language Model, Safety Alignment
Watermark Stealing in Large Language Models | Authors: Nikola Jovanović, Robin Staab, Martin Vechev | Published: 2024-02-29 | Updated: 2024-06-24 | Tags: Model Extraction Attack, Large Language Model, Taxonomy of Attacks
Measuring Implicit Bias in Explicitly Unbiased Large Language Models | Authors: Xuechunzi Bai, Angelina Wang, Ilia Sucholutsky, Thomas L. Griffiths | Published: 2024-02-06 | Updated: 2024-05-23 | Tags: Bias Detection in AI Output, Algorithm Fairness, Large Language Model