Signed-Prompt: A New Approach to Prevent Prompt Injection Attacks Against LLM-Integrated Applications | Authors: Xuchen Suo | Published: 2024-01-15 | Tags: LLM Security, Prompt Injection
Detection and Defense Against Prominent Attacks on Preconditioned LLM-Integrated Virtual Assistants | Authors: Chun Fai Chan, Daniel Wankit Yip, Aysan Esmradi | Published: 2024-01-02 | Tags: LLM Security, Character Role Acting, System Prompt Generation
A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models | Authors: Daniel Wankit Yip, Aysan Esmradi, Chun Fai Chan | Published: 2024-01-02 | Tags: LLM Security, Prompt Injection, Attack Evaluation
Jatmo: Prompt Injection Defense by Task-Specific Finetuning | Authors: Julien Piet, Maha Alrashed, Chawin Sitawarin, Sizhe Chen, Zeming Wei, Elizabeth Sun, Basel Alomair, David Wagner | Published: 2023-12-29 | Updated: 2024-01-08 | Tags: LLM Security, Cyber Attack, Prompt Injection
MetaAID 2.5: A Secure Framework for Developing Metaverse Applications via Large Language Models | Authors: Hongyin Zhu | Published: 2023-12-22 | Tags: LLM Security, Data Generation, Prompt Injection
No-Skim: Towards Efficiency Robustness Evaluation on Skimming-based Language Models | Authors: Shengyao Zhang, Mi Zhang, Xudong Pan, Min Yang | Published: 2023-12-15 | Updated: 2023-12-18 | Tags: Evolution of AI, LLM Security, Watermarking
Maatphor: Automated Variant Analysis for Prompt Injection Attacks | Authors: Ahmed Salem, Andrew Paverd, Boris Köpf | Published: 2023-12-12 | Tags: LLM Security, Prompt Injection, Evaluation Method
Make Them Spill the Beans! Coercive Knowledge Extraction from (Production) LLMs | Authors: Zhuo Zhang, Guangyu Shen, Guanhong Tao, Siyuan Cheng, Xiangyu Zhang | Published: 2023-12-08 | Tags: LLM Security, Prompt Injection, Inappropriate Content Generation
Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks | Authors: Shuli Jiang, Swanand Ravindra Kadhe, Yi Zhou, Ling Cai, Nathalie Baracaldo | Published: 2023-12-07 | Tags: LLM Security, Poisoning Attack, Model Performance Evaluation
DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions | Authors: Fangzhou Wu, Xiaogeng Liu, Chaowei Xiao | Published: 2023-12-07 | Updated: 2023-12-12 | Tags: LLM Security, Code Generation, Prompt Injection