Argus: A Multi-Agent Sensitive Information Leakage Detection Framework Based on Hierarchical Reference Relationships | Authors: Bin Wang, Hui Li, Liyang Zhang, Qijia Zhuang, Ao Yang, Dong Zhang, Xijun Luo, Bing Lin | Published: 2025-12-09 | Tags: Privacy Leakage, False Positive Analysis, Information Security
Systematization of Knowledge: Security and Safety in the Model Context Protocol Ecosystem | Authors: Shiva Gaire, Srijan Gyawali, Saroj Mishra, Suman Niroula, Dilip Thakur, Umesh Yadav | Published: 2025-12-09 | Tags: Poisoning attack on RAG, Cybersecurity, Information Security
Privacy Practices of Browser Agents | Authors: Alisha Ukani, Hamed Haddadi, Ali Shahin Shamsabadi, Peter Snyder | Published: 2025-12-08 | Tags: Indirect Prompt Injection, Privacy Analysis, Information Security
A Light-Weight Large Language Model File Format for Highly-Secure Model Distribution | Authors: Huifeng Zhu, Shijie Li, Qinfeng Li, Yier Jin | Published: 2025-12-04 | Tags: Model DoS, Detection of Model Extraction Attacks, Information Security
SeedAIchemy: LLM-Driven Seed Corpus Generation for Fuzzing | Authors: Aidan Wen, Norah A. Alzahrani, Jingzhi Jiang, Andrew Joe, Karen Shieh, Andy Zhang, Basel Alomair, David Wagner | Published: 2025-11-16 | Tags: Bug Detection Methods, Prompt Injection, Information Security
GRAPHTEXTACK: A Realistic Black-Box Node Injection Attack on LLM-Enhanced GNNs | Authors: Jiaji Ma, Puja Trivedi, Danai Koutra | Published: 2025-11-16 | Tags: Poisoning attack on RAG, Classification of Malicious Actors, Information Security
Large Language Models for Cyber Security | Authors: Raunak Somani, Aswani Kumar Cherukuri | Published: 2025-11-06 | Tags: Poisoning attack on RAG, Indirect Prompt Injection, Information Security
Black-Box Guardrail Reverse-engineering Attack | Authors: Hongwei Yao, Yun Xia, Shuo Shao, Haoran Shi, Tong Qiao, Cong Wang | Published: 2025-11-06 | Tags: Disabling Safety Mechanisms of LLM, Prompt leaking, Information Security
Hybrid Fuzzing with LLM-Guided Input Mutation and Semantic Feedback | Authors: Shiyin Lin | Published: 2025-11-06 | Tags: Prompt Injection, Dynamic Analysis, Information Security
Measuring the Security of Mobile LLM Agents under Adversarial Prompts from Untrusted Third-Party Channels | Authors: Chenghao Du, Quanfeng Huang, Tingxuan Tang, Zihao Wang, Adwait Nadkarni, Yue Xiao | Published: 2025-10-31 | Updated: 2025-11-06 | Tags: Indirect Prompt Injection, Prompt Injection, Information Security