Prompt Injection

MetaDefense: Defending Finetuning-based Jailbreak Attack Before and During Generation

Authors: Weisen Jiang, Sinno Jialin Pan | Published: 2025-10-09
Prompt Injection
Robustness
Defense Mechanism

Proactive defense against LLM Jailbreak

Authors: Weiliang Zhao, Jinjun Peng, Daniel Ben-Levi, Zhou Yu, Junfeng Yang | Published: 2025-10-06
Disabling Safety Mechanisms of LLM
Prompt Injection
Integration of Defense Methods

P2P: A Poison-to-Poison Remedy for Reliable Backdoor Defense in LLMs

Authors: Shuai Zhao, Xinyi Wu, Shiqian Zhao, Xiaobao Wu, Zhongliang Guo, Yanhao Jia, Anh Tuan Luu | Published: 2025-10-06
Prompt Injection
Prompt Validation
Integration of Defense Methods

NEXUS: Network Exploration for eXploiting Unsafe Sequences in Multi-Turn LLM Jailbreaks

Authors: Javad Rafiei Asl, Sidhant Narula, Mohammad Ghasemigol, Eduardo Blanco, Daniel Takabi | Published: 2025-10-03 | Updated: 2025-10-21
Prompt Injection
Large Language Model
Jailbreak Method

FalseCrashReducer: Mitigating False Positive Crashes in OSS-Fuzz-Gen Using Agentic AI

Authors: Paschal C. Amusuo, Dongge Liu, Ricardo Andres Calvo Mendez, Jonathan Metzman, Oliver Chang, James C. Davis | Published: 2025-10-02
Program Analysis
Prompt Injection
False Positive Management

Bypassing Prompt Guards in Production with Controlled-Release Prompting

Authors: Jaiden Fairoze, Sanjam Garg, Keewoo Lee, Mingyuan Wang | Published: 2025-10-02
Prompt Injection
Large Language Model
Structural Attack

Fingerprinting LLMs via Prompt Injection

Authors: Yuepeng Hu, Zhengyuan Jiang, Mengyuan Li, Osama Ahmed, Zhicong Huang, Cheng Hong, Neil Gong | Published: 2025-09-29 | Updated: 2025-10-01
Indirect Prompt Injection
Token Identification Method
Prompt Injection

MaskSQL: Safeguarding Privacy for LLM-Based Text-to-SQL via Abstraction

Authors: Sepideh Abedini, Shubhankar Mohapatra, D. B. Emerson, Masoumeh Shafieinejad, Jesse C. Cresswell, Xi He | Published: 2025-09-27 | Updated: 2025-09-30
SQL Query Generation
Prompt Injection
Prompt Leaking

RLCracker: Exposing the Vulnerability of LLM Watermarks with Adaptive RL Attacks

Authors: Hanbo Huang, Yiran Zhang, Hao Zheng, Xuan Gong, Yihan Li, Lin Liu, Shiyu Liang | Published: 2025-09-25
Disabling Safety Mechanisms of LLM
Prompt Injection
Watermark Design

Can Federated Learning Safeguard Private Data in LLM Training? Vulnerabilities, Attacks, and Defense Evaluation

Authors: Wenkai Guo, Xuefeng Liu, Haolin Wang, Jianwei Niu, Shaojie Tang, Jing Yuan | Published: 2025-09-25
Privacy Protection Method
Prompt Injection
Poisoning