From Assistants to Adversaries: Exploring the Security Risks of Mobile LLM Agents | Authors: Liangxuan Wu, Chao Wang, Tianming Liu, Yanjie Zhao, Haoyu Wang | Published: 2025-05-19 | Updated: 2025-05-20 | Tags: LLM Security, Indirect Prompt Injection, Attack Method
Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks? | Authors: Zi Liang, Haibo Hu, Qingqing Ye, Yaxin Xiao, Ronghua Li | Published: 2025-05-19 | Tags: LLM Security, Poisoning Attack, Robustness Requirements
Malware Families Discovery via Open-Set Recognition on Android Manifest Permissions | Authors: Filippo Leveni, Matteo Mistura, Francesco Iubatti, Carmine Giangregorio, Nicolò Pastore, Cesare Alippi, Giacomo Boracchi | Published: 2025-05-19 | Tags: Online Malware Detection, Dataset for Malware Classification, Malware Detection Method
Web IP at Risk: Prevent Unauthorized Real-Time Retrieval by Large Language Models | Authors: Yisheng Zhong, Yizhu Wen, Junfeng Guo, Mehran Kafai, Heng Huang, Hanqing Guo, Zhuangdi Zhu | Published: 2025-05-19 | Tags: LLM Security, Indirect Prompt Injection, Privacy Management
R1dacted: Investigating Local Censorship in DeepSeek’s R1 Language Model | Authors: Ali Naseh, Harsh Chaudhari, Jaechul Roh, Mingshi Wu, Alina Oprea, Amir Houmansadr | Published: 2025-05-19 | Tags: Bias Detection in AI Output, Prompt Leaking, Censorship Behavior
IP Leakage Attacks Targeting LLM-Based Multi-Agent Systems | Authors: Liwen Wang, Wenxuan Wang, Shuai Wang, Zongjie Li, Zhenlan Ji, Zongyi Lyu, Daoyuan Wu, Shing-Chi Cheung | Published: 2025-05-18 | Updated: 2025-05-20 | Tags: Indirect Prompt Injection, Privacy Leakage, Information Propagation Methods
MARVEL: Multi-Agent RTL Vulnerability Extraction using Large Language Models | Authors: Luca Collini, Baleegh Ahmad, Joey Ah-kiow, Ramesh Karri | Published: 2025-05-17 | Updated: 2025-06-09 | Tags: Poisoning Attack on RAG, Cyber Threat, Prompt Injection
JULI: Jailbreak Large Language Models by Self-Introspection | Authors: Jesson Wang, Zhanhao Hu, David Wagner | Published: 2025-05-17 | Updated: 2025-05-20 | Tags: API Security, Disabling Safety Mechanisms of LLM, Prompt Injection
Phare: A Safety Probe for Large Language Models | Authors: Pierre Le Jeune, Benoît Malézieux, Weixuan Xiao, Matteo Dora | Published: 2025-05-16 | Updated: 2025-05-19 | Tags: RAG, Bias Mitigation Techniques, Hallucination
S3C2 Summit 2024-09: Industry Secure Software Supply Chain Summit | Authors: Imranur Rahman, Yasemin Acar, Michel Cukier, William Enck, Christian Kastner, Alexandros Kapravelos, Dominik Wermke, Laurie Williams | Published: 2025-05-15 | Tags: LLM Security, Software Supply Chain Security, Balance between Education and Automation