
Terrarium: Revisiting the Blackboard for Multi-Agent Safety, Privacy, and Security Studies

Authors: Mason Nakamura, Abhinav Kumar, Saaduddin Mahmud, Sahar Abdelnabi, Shlomo Zilberstein, Eugene Bagdasarian | Published: 2025-10-16
Agent Design
Large Language Model
Communication Protocol

In-Browser LLM-Guided Fuzzing for Real-Time Prompt Injection Testing in Agentic AI Browsers

Authors: Avihay Cohen | Published: 2025-10-15
Indirect Prompt Injection
Large Language Model
Automatic Generation Framework

Who Speaks for the Trigger? Dynamic Expert Routing in Backdoored Mixture-of-Experts Transformers

Authors: Xin Zhao, Xiaojun Chen, Bingshan Liu, Haoyu Gao, Zhendong Zhao, Yilong Chen | Published: 2025-10-15
Backdoor Detection
Prompt leaking
Large Language Model

Evaluating and Mitigating LLM-as-a-judge Bias in Communication Systems

Authors: Jiaxin Gao, Chen Chen, Yanwen Jia, Xueluan Gong, Kwok-Yan Lam, Qian Wang | Published: 2025-10-14
Bias
Prompt leaking
Large Language Model

Traveling Salesman-Based Token Ordering Improves Stability in Homomorphically Encrypted Language Models

Authors: Donghwan Rho, Sieun Seo, Hyewon Sung, Chohong Min, Ernest K. Ryu | Published: 2025-10-14
Token Distribution Analysis
Membership Inference
Large Language Model

PromptLocate: Localizing Prompt Injection Attacks

Authors: Yuqi Jia, Yupei Liu, Zedian Shao, Jinyuan Jia, Neil Gong | Published: 2025-10-14
Prompt validation
Large Language Model
Evaluation Metrics

PACEbench: A Framework for Evaluating Practical AI Cyber-Exploitation Capabilities

Authors: Zicheng Liu, Lige Huang, Jie Zhang, Dongrui Liu, Yuan Tian, Jing Shao | Published: 2025-10-13
Security Analysis Method
Large Language Model
Defense Mechanism

Machine Unlearning Meets Adversarial Robustness via Constrained Interventions on LLMs

Authors: Fatmazohra Rezkellah, Ramzi Dakhmouche | Published: 2025-10-03 | Updated: 2025-10-15
Identification of AI Output
Robustness
Large Language Model

NEXUS: Network Exploration for eXploiting Unsafe Sequences in Multi-Turn LLM Jailbreaks

Authors: Javad Rafiei Asl, Sidhant Narula, Mohammad Ghasemigol, Eduardo Blanco, Daniel Takabi | Published: 2025-10-03 | Updated: 2025-10-21
Prompt Injection
Large Language Model
Jailbreak Method

Bypassing Prompt Guards in Production with Controlled-Release Prompting

Authors: Jaiden Fairoze, Sanjam Garg, Keewoo Lee, Mingyuan Wang | Published: 2025-10-02
Prompt Injection
Large Language Model
Structural Attack