Large Language Model

Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards

Authors: Yangsibo Huang, Milad Nasr, Anastasios Angelopoulos, Nicholas Carlini, Wei-Lin Chiang, Christopher A. Choquette-Choo, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Ken Ziyu Liu, Ion Stoica, Florian Tramer, Chiyuan Zhang | Published: 2025-01-13
Cybersecurity
Large Language Model
Attack Evaluation

SATA: A Paradigm for LLM Jailbreak via Simple Assistive Task Linkage

Authors: Xiaoning Dong, Wenbo Hu, Wei Xu, Tianxing He | Published: 2024-12-19 | Updated: 2025-03-21
Prompt Injection
Large Language Model
Adversarial Learning

Towards Action Hijacking of Large Language Model-based Agent

Authors: Yuyang Zhang, Kangjie Chen, Jiaxin Gao, Ronghao Cui, Run Wang, Lina Wang, Tianwei Zhang | Published: 2024-12-14 | Updated: 2025-06-12
Performance Evaluation
Prompt Leaking
Large Language Model

“Moralized” Multi-Step Jailbreak Prompts: Black-Box Testing of Guardrails in Large Language Models for Verbal Attacks

Authors: Libo Wang | Published: 2024-11-23 | Updated: 2025-03-20
Prompt Injection
Large Language Model

JailbreakLens: Interpreting Jailbreak Mechanism in the Lens of Representation and Circuit

Authors: Zeqing He, Zhibo Wang, Zhixuan Chu, Huiyu Xu, Wenhui Zhang, Qinglong Wang, Rui Zheng | Published: 2024-11-17 | Updated: 2025-04-24
Disabling Safety Mechanisms of LLM
Prompt Injection
Large Language Model

Attention Tracker: Detecting Prompt Injection Attacks in LLMs

Authors: Kuo-Han Hung, Ching-Yun Ko, Ambrish Rawat, I-Hsin Chung, Winston H. Hsu, Pin-Yu Chen | Published: 2024-11-01 | Updated: 2025-04-23
Indirect Prompt Injection
Large Language Model
Attention Mechanism

Code Vulnerability Repair with Large Language Model using Context-Aware Prompt Tuning

Authors: Arshiya Khan, Guannan Liu, Xing Gao | Published: 2024-09-27 | Updated: 2025-06-11
Code Vulnerability Repair
Security Context Integration
Large Language Model

Hide Your Malicious Goal Into Benign Narratives: Jailbreak Large Language Models through Carrier Articles

Authors: Zhilong Wang, Haizhou Wang, Nanqing Luo, Lan Zhang, Xiaoyan Sun, Yebo Cao, Peng Liu | Published: 2024-08-20 | Updated: 2025-02-07
Prompt Injection
Large Language Model
Attack Scenario Analysis

From Theft to Bomb-Making: The Ripple Effect of Unlearning in Defending Against Jailbreak Attacks

Authors: Zhexin Zhang, Junxiao Yang, Yida Lu, Pei Ke, Shiyao Cui, Chujie Zheng, Hongning Wang, Minlie Huang | Published: 2024-07-03 | Updated: 2025-05-20
Prompt Injection
Large Language Model
Law Enforcement Evasion

Knowledge-to-Jailbreak: Investigating Knowledge-driven Jailbreaking Attacks for Large Language Models

Authors: Shangqing Tu, Zhuoran Pan, Wenxuan Wang, Zhexin Zhang, Yuliang Sun, Jifan Yu, Hongning Wang, Lei Hou, Juanzi Li | Published: 2024-06-17 | Updated: 2025-06-09
Cooperative Effects with LLM
Prompt Injection
Large Language Model