Information Security

Argus: A Multi-Agent Sensitive Information Leakage Detection Framework Based on Hierarchical Reference Relationships

Authors: Bin Wang, Hui Li, Liyang Zhang, Qijia Zhuang, Ao Yang, Dong Zhang, Xijun Luo, Bing Lin | Published: 2025-12-09
Privacy Leakage
False Positive Analysis
Information Security

Systematization of Knowledge: Security and Safety in the Model Context Protocol Ecosystem

Authors: Shiva Gaire, Srijan Gyawali, Saroj Mishra, Suman Niroula, Dilip Thakur, Umesh Yadav | Published: 2025-12-09
Poisoning attack on RAG
Cybersecurity
Information Security

Privacy Practices of Browser Agents

Authors: Alisha Ukani, Hamed Haddadi, Ali Shahin Shamsabadi, Peter Snyder | Published: 2025-12-08
Indirect Prompt Injection
Privacy Analysis
Information Security

A Light-Weight Large Language Model File Format for Highly-Secure Model Distribution

Authors: Huifeng Zhu, Shijie Li, Qinfeng Li, Yier Jin | Published: 2025-12-04
Model DoS
Detection of Model Extraction Attacks
Information Security

SeedAIchemy: LLM-Driven Seed Corpus Generation for Fuzzing

Authors: Aidan Wen, Norah A. Alzahrani, Jingzhi Jiang, Andrew Joe, Karen Shieh, Andy Zhang, Basel Alomair, David Wagner | Published: 2025-11-16
Bug Detection Methods
Prompt Injection
Information Security

GRAPHTEXTACK: A Realistic Black-Box Node Injection Attack on LLM-Enhanced GNNs

Authors: Jiaji Ma, Puja Trivedi, Danai Koutra | Published: 2025-11-16
Poisoning attack on RAG
Classification of Malicious Actors
Information Security

Large Language Models for Cyber Security

Authors: Raunak Somani, Aswani Kumar Cherukuri | Published: 2025-11-06
Poisoning attack on RAG
Indirect Prompt Injection
Information Security

Black-Box Guardrail Reverse-engineering Attack

Authors: Hongwei Yao, Yun Xia, Shuo Shao, Haoran Shi, Tong Qiao, Cong Wang | Published: 2025-11-06
Disabling Safety Mechanisms of LLM
Prompt leaking
Information Security

Hybrid Fuzzing with LLM-Guided Input Mutation and Semantic Feedback

Authors: Shiyin Lin | Published: 2025-11-06
Prompt Injection
Dynamic Analysis
Information Security

Measuring the Security of Mobile LLM Agents under Adversarial Prompts from Untrusted Third-Party Channels

Authors: Chenghao Du, Quanfeng Huang, Tingxuan Tang, Zihao Wang, Adwait Nadkarni, Yue Xiao | Published: 2025-10-31 | Updated: 2025-11-06
Indirect Prompt Injection
Prompt Injection
Information Security