Large Language Model

PromptLocate: Localizing Prompt Injection Attacks

Authors: Yuqi Jia, Yupei Liu, Zedian Shao, Jinyuan Jia, Neil Gong | Published: 2025-10-14
Prompt Validation
Large Language Model
Evaluation Metrics

PACEbench: A Framework for Evaluating Practical AI Cyber-Exploitation Capabilities

Authors: Zicheng Liu, Lige Huang, Jie Zhang, Dongrui Liu, Yuan Tian, Jing Shao | Published: 2025-10-13
Security Analysis Method
Large Language Model
Defense Mechanism

Machine Unlearning Meets Adversarial Robustness via Constrained Interventions on LLMs

Authors: Fatmazohra Rezkellah, Ramzi Dakhmouche | Published: 2025-10-03 | Updated: 2025-10-15
Identification of AI Output
Robustness
Large Language Model

NEXUS: Network Exploration for eXploiting Unsafe Sequences in Multi-Turn LLM Jailbreaks

Authors: Javad Rafiei Asl, Sidhant Narula, Mohammad Ghasemigol, Eduardo Blanco, Daniel Takabi | Published: 2025-10-03 | Updated: 2025-10-21
Prompt Injection
Large Language Model
Jailbreak Method

Bypassing Prompt Guards in Production with Controlled-Release Prompting

Authors: Jaiden Fairoze, Sanjam Garg, Keewoo Lee, Mingyuan Wang | Published: 2025-10-02
Prompt Injection
Large Language Model
Structural Attack

EvoMail: Self-Evolving Cognitive Agents for Adaptive Spam and Phishing Email Defense

Authors: Wei Huang, De-Tian Chu, Lin-Yuan Bai, Wei Kang, Hai-Tao Zhang, Bo Li, Zhi-Mo Han, Jing Ge, Hai-Feng Lin | Published: 2025-09-25
Phishing Attack
Large Language Model
Self-Evolving Framework

LLM-based Vulnerability Discovery through the Lens of Code Metrics

Authors: Felix Weissberg, Lukas Pirch, Erik Imgrund, Jonas Möller, Thorsten Eisenhofer, Konrad Rieck | Published: 2025-09-23
Code Metrics Evaluation
Prompt Injection
Large Language Model

LLM Jailbreak Detection for (Almost) Free!

Authors: Guorui Chen, Yifan Xia, Xiaojun Jia, Zhijiang Li, Philip Torr, Jindong Gu | Published: 2025-09-18
Large Language Model
Evaluation Method
Watermarking Technology

Yet Another Watermark for Large Language Models

Authors: Siyuan Bao, Ying Shi, Zhiguang Yang, Hanzhou Wu, Xinpeng Zhang | Published: 2025-09-16
Prompt Leaking
Large Language Model
Watermarking Technology

NeuroStrike: Neuron-Level Attacks on Aligned LLMs

Authors: Lichao Wu, Sasha Behrouzi, Mohamadreza Rostami, Maximilian Thang, Stjepan Picek, Ahmad-Reza Sadeghi | Published: 2025-09-15
Prompt Injection
Large Language Model
Analysis of Safety Mechanisms