BitHydra: Towards Bit-flip Inference Cost Attack against Large Language Models. Authors: Xiaobei Yan, Yiming Li, Zhaoxin Fan, Han Qiu, Tianwei Zhang | Published: 2025-05-22 | Tags: LLM Security, Text Generation Method, Prompt Injection
Finetuning-Activated Backdoors in LLMs. Authors: Thibaud Gloaguen, Mark Vero, Robin Staab, Martin Vechev | Published: 2025-05-22 | Tags: LLM Security, Backdoor Attack, Prompt Injection
Can Large Language Models Really Recognize Your Name? Authors: Dzung Pham, Peter Kairouz, Niloofar Mireshghallah, Eugene Bagdasarian, Chau Minh Pham, Amir Houmansadr | Published: 2025-05-20 | Tags: LLM Security, Indirect Prompt Injection, Privacy Leakage
Is Your Prompt Safe? Investigating Prompt Injection Attacks Against Open-Source LLMs. Authors: Jiawen Wang, Pritha Gupta, Ivan Habernal, Eyke Hüllermeier | Published: 2025-05-20 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection
Exploring Jailbreak Attacks on LLMs through Intent Concealment and Diversion. Authors: Tiehan Cui, Yanxu Mao, Peipei Liu, Congying Liu, Datao You | Published: 2025-05-20 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection
Fixing 7,400 Bugs for 1$: Cheap Crash-Site Program Repair. Authors: Han Zheng, Ilia Shumailov, Tianqi Fan, Aiden Hall, Mathias Payer | Published: 2025-05-19 | Tags: LLM Security, Bug Fixing Method, Watermarking Technology
The Hidden Dangers of Browsing AI Agents. Authors: Mykyta Mudryi, Markiyan Chaklosh, Grzegorz Wójcik | Published: 2025-05-19 | Tags: LLM Security, Indirect Prompt Injection, Attack Method
Evaluating the efficacy of LLM Safety Solutions: The Palit Benchmark Dataset. Authors: Sayon Palit, Daniel Woods | Published: 2025-05-19 | Updated: 2025-05-20 | Tags: LLM Security, Prompt Injection, Attack Method
From Assistants to Adversaries: Exploring the Security Risks of Mobile LLM Agents. Authors: Liangxuan Wu, Chao Wang, Tianming Liu, Yanjie Zhao, Haoyu Wang | Published: 2025-05-19 | Updated: 2025-05-20 | Tags: LLM Security, Indirect Prompt Injection, Attack Method
Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks? Authors: Zi Liang, Haibo Hu, Qingqing Ye, Yaxin Xiao, Ronghua Li | Published: 2025-05-19 | Tags: LLM Security, Poisoning Attack, Robustness Requirements