Prompt Injection

LLM-Assisted Web Measurements

Authors: Simone Bozzolan, Stefano Calzavara, Lorenzo Cazzaro | Published: 2025-10-09
Bias Detection in AI Output
Application Classification Method
Prompt Injection

Fewer Weights, More Problems: A Practical Attack on LLM Pruning

Authors: Kazuki Egashira, Robin Staab, Thibaud Gloaguen, Mark Vero, Martin Vechev | Published: 2025-10-09
Security Analysis Method
Prompt Injection
Defense Effectiveness Analysis

MetaDefense: Defending Finetuning-based Jailbreak Attack Before and During Generation

Authors: Weisen Jiang, Sinno Jialin Pan | Published: 2025-10-09
Prompt Injection
Robustness
Defense Mechanism

Proactive defense against LLM Jailbreak

Authors: Weiliang Zhao, Jinjun Peng, Daniel Ben-Levi, Zhou Yu, Junfeng Yang | Published: 2025-10-06
Disabling Safety Mechanisms of LLM
Prompt Injection
Integration of Defense Methods

P2P: A Poison-to-Poison Remedy for Reliable Backdoor Defense in LLMs

Authors: Shuai Zhao, Xinyi Wu, Shiqian Zhao, Xiaobao Wu, Zhongliang Guo, Yanhao Jia, Anh Tuan Luu | Published: 2025-10-06
Prompt Injection
Prompt Validation
Integration of Defense Methods

NEXUS: Network Exploration for eXploiting Unsafe Sequences in Multi-Turn LLM Jailbreaks

Authors: Javad Rafiei Asl, Sidhant Narula, Mohammad Ghasemigol, Eduardo Blanco, Daniel Takabi | Published: 2025-10-03 | Updated: 2025-10-21
Prompt Injection
Large Language Model
Jailbreak Method

Untargeted Jailbreak Attack

Authors: Xinzhe Huang, Wenjing Hu, Tianhang Zheng, Kedong Xiu, Xiaojun Jia, Di Wang, Zhan Qin, Kui Ren | Published: 2025-10-03 | Updated: 2025-10-28
Prompt Injection
Prompt Leaking
Effectiveness Analysis of Defense Methods

FalseCrashReducer: Mitigating False Positive Crashes in OSS-Fuzz-Gen Using Agentic AI

Authors: Paschal C. Amusuo, Dongge Liu, Ricardo Andres Calvo Mendez, Jonathan Metzman, Oliver Chang, James C. Davis | Published: 2025-10-02
Program Analysis
Prompt Injection
False Positive Management

Bypassing Prompt Guards in Production with Controlled-Release Prompting

Authors: Jaiden Fairoze, Sanjam Garg, Keewoo Lee, Mingyuan Wang | Published: 2025-10-02
Prompt Injection
Large Language Model
Structural Attack

Fingerprinting LLMs via Prompt Injection

Authors: Yuepeng Hu, Zhengyuan Jiang, Mengyuan Li, Osama Ahmed, Zhicong Huang, Cheng Hong, Neil Gong | Published: 2025-09-29 | Updated: 2025-10-01
Indirect Prompt Injection
Token Identification Method
Prompt Injection