An Investigation of Large Language Models and Their Vulnerabilities in Spam Detection
Authors: Qiyao Tang, Xiangyang Li | Published: 2025-04-14
Tags: LLM Performance Evaluation, Prompt Injection, Model DoS

CheatAgent: Attacking LLM-Empowered Recommender Systems via LLM Agent
Authors: Liang-bo Ning, Shijie Wang, Wenqi Fan, Qing Li, Xin Xu, Hao Chen, Feiran Huang | Published: 2025-04-13 | Updated: 2025-04-24
Tags: Indirect Prompt Injection, Prompt Injection, Attacker Behavior Analysis

Sugar-Coated Poison: Benign Generation Unlocks LLM Jailbreaking
Authors: Yu-Hang Wu, Yu-Jie Xiong, Jie-Zhang | Published: 2025-04-08
Tags: LLM Application, Prompt Injection, Large Language Model

Generative Large Language Model usage in Smart Contract Vulnerability Detection
Authors: Peter Ince, Jiangshan Yu, Joseph K. Liu, Xiaoning Du | Published: 2025-04-07
Tags: Prompt Injection, Prompt Leaking, Vulnerability Analysis

Representation Bending for Large Language Model Safety
Authors: Ashkan Yousefpour, Taeheon Kim, Ryan S. Kwon, Seungbeen Lee, Wonje Jeung, Seungju Han, Alvin Wan, Harrison Ngan, Youngjae Yu, Jonghyun Choi | Published: 2025-04-02
Tags: Prompt Injection, Prompt Leaking, Safety Alignment

LightDefense: A Lightweight Uncertainty-Driven Defense against Jailbreaks via Shifted Token Distribution
Authors: Zhuoran Yang, Jie Peng, Zhen Tan, Tianlong Chen, Yanyong Zhang | Published: 2025-04-02
Tags: Prompt Injection, Model Performance Evaluation, Uncertainty Measurement

No Free Lunch with Guardrails
Authors: Divyanshu Kumar, Nitin Aravind Birur, Tanay Baswa, Sahil Agarwal, Prashanth Harshangi | Published: 2025-04-01 | Updated: 2025-04-03
Tags: Prompt Injection, Model DoS, Information Security

Output Constraints as Attack Surface: Exploiting Structured Generation to Bypass LLM Safety Mechanisms
Authors: Shuoming Zhang, Jiacheng Zhao, Ruiyuan Xu, Xiaobing Feng, Huimin Cui | Published: 2025-03-31
Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection

Detecting Functional Bugs in Smart Contracts through LLM-Powered and Bug-Oriented Composite Analysis
Authors: Binbin Zhao, Xingshuang Lin, Yuan Tian, Saman Zonouz, Na Ruan, Jiliang Li, Raheem Beyah, Shouling Ji | Published: 2025-03-31
Tags: Indirect Prompt Injection, Smart Contract Audit, Prompt Injection

Prompt, Divide, and Conquer: Bypassing Large Language Model Safety Filters via Segmented and Distributed Prompt Processing
Authors: Johan Wahréus, Ahmed Hussain, Panos Papadimitratos | Published: 2025-03-27
Tags: System Development, Prompt Injection, Large Language Model