Attention is All You Need to Defend Against Indirect Prompt Injection Attacks in LLMs | Authors: Yinan Zhong, Qianhao Miao, Yanjiao Chen, Jiangyi Deng, Yushi Cheng, Wenyuan Xu | Published: 2025-12-09 | Tags: Indirect Prompt Injection, Prompt Validation, Large Language Model | Literature Database
LLMs can hide text in other text of the same length | Authors: Antonio Norelli, Michael Bronstein | Published: 2025-10-22 | Updated: 2025-10-27 | Tags: Privacy Protection, Prompt Validation, Provision of Information for Educational Purposes
PromptLocate: Localizing Prompt Injection Attacks | Authors: Yuqi Jia, Yupei Liu, Zedian Shao, Jinyuan Jia, Neil Gong | Published: 2025-10-14 | Tags: Prompt Validation, Large Language Model, Evaluation Metrics
P2P: A Poison-to-Poison Remedy for Reliable Backdoor Defense in LLMs | Authors: Shuai Zhao, Xinyi Wu, Shiqian Zhao, Xiaobao Wu, Zhongliang Guo, Yanhao Jia, Anh Tuan Luu | Published: 2025-10-06 | Tags: Prompt Injection, Prompt Validation, Integration of Defense Methods
Detection of security smells in IaC scripts through semantics-aware code and language processing | Authors: Aicha War, Adnan A. Rawass, Abdoul K. Kabore, Jordan Samhi, Jacques Klein, Tegawende F. Bissyande | Published: 2025-09-23 | Tags: Code Representation Techniques, Security Analysis, Prompt Validation
EPT Benchmark: Evaluation of Persian Trustworthiness in Large Language Models | Authors: Mohammad Reza Mirbagheri, Mohammad Mahdi Mirkamali, Zahra Motoshaker Arani, Ali Javeri, Amir Mahdi Sadeghzadeh, Rasool Jalili | Published: 2025-09-08 | Tags: Fairness Learning, Prompt Validation, Safety
PromptCOS: Towards System Prompt Copyright Auditing for LLMs via Content-level Output Similarity | Authors: Yuchen Yang, Yiming Li, Hongwei Yao, Enhao Huang, Shuo Shao, Bingrun Yang, Zhibo Wang, Dacheng Tao, Zhan Qin | Published: 2025-09-03 | Tags: Prompt Validation, Prompt Leaking, Model Extraction Attack
EverTracer: Hunting Stolen Large Language Models via Stealthy and Robust Probabilistic Fingerprint | Authors: Zhenhua Xu, Meng Han, Wenpeng Xing | Published: 2025-09-03 | Tags: Disabling Safety Mechanisms of LLM, Data Protection Method, Prompt Validation
PromptSleuth: Detecting Prompt Injection via Semantic Intent Invariance | Authors: Mengxiao Wang, Yuxuan Zhang, Guofei Gu | Published: 2025-08-28 | Tags: Indirect Prompt Injection, Prompt Injection, Prompt Validation
Attacking interpretable NLP systems | Authors: Eldor Abdukhamidov, Tamer Abuhmed, Joanna C. S. Santos, Mohammed Abuhamad | Published: 2025-07-22 | Tags: Prompt Injection, Prompt Validation, Adversarial Attack Methods