Literature Database

Exploit Tool Invocation Prompt for Tool Behavior Hijacking in LLM-Based Agentic System

Authors: Yu Liu, Yuchong Xie, Mingyu Luo, Zesen Liu, Zhixiang Zhang, Kaikai Zhang, Zongjie Li, Ping Chen, Shuai Wang, Dongdong She | Published: 2025-09-06 | Updated: 2025-09-15
Prompt Injection
Model DoS
Attack Evaluation

Self-adaptive Dataset Construction for Real-World Multimodal Safety Scenarios

Authors: Jingen Qu, Lijun Li, Bo Zhang, Yichen Yan, Jing Shao | Published: 2025-09-04
Prompt Injection
Risk Analysis Method
Safety Evaluation Method

An Automated, Scalable Machine Learning Model Inversion Assessment Pipeline

Authors: Tyler Shumaker, Jessica Carpenter, David Saranchak, Nathaniel D. Bastian | Published: 2025-09-04
Model Inversion
Model Extraction Attack
Risk Analysis Method

KubeGuard: LLM-Assisted Kubernetes Hardening via Configuration Files and Runtime Logs Analysis

Authors: Omri Sgan Cohen, Ehud Malul, Yair Meidan, Dudu Mimran, Yuval Elovici, Asaf Shabtai | Published: 2025-09-04
Security Strategy Generation
Network Forensics
Audit Log Analysis

NeuroBreak: Unveil Internal Jailbreak Mechanisms in Large Language Models

Authors: Chuhan Zhang, Ye Zhang, Bowen Shi, Yuyou Gan, Tianyu Du, Shouling Ji, Dazhan Deng, Yingcai Wu | Published: 2025-09-04
Prompt Injection
Neurons and Safety
Defense Mechanism

Federated Learning: An approach with Hybrid Homomorphic Encryption

Authors: Pedro Correia, Ivan Silva, Ivone Amorim, Eva Maia, Isabel Praça | Published: 2025-09-03
Integration of FL and HE
Privacy Design Principles
Federated Learning

VulnRepairEval: An Exploit-Based Evaluation Framework for Assessing Large Language Model Vulnerability Repair Capabilities

Authors: Weizhe Wang, Wei Ma, Qiang Hu, Yao Zhang, Jianfei Sun, Bin Wu, Yang Liu, Guangquan Xu, Lingxiao Jiang | Published: 2025-09-03
Prompt Injection
Large Language Model
Vulnerability Analysis

A Comprehensive Guide to Differential Privacy: From Theory to User Expectations

Authors: Napsu Karmitsa, Antti Airola, Tapio Pahikkala, Tinja Pitkämäki | Published: 2025-09-03
Detection of Poison Data for Backdoor Attacks
Privacy Design Principles
Differential Privacy

PromptCOS: Towards System Prompt Copyright Auditing for LLMs via Content-level Output Similarity

Authors: Yuchen Yang, Yiming Li, Hongwei Yao, Enhao Huang, Shuo Shao, Bingrun Yang, Zhibo Wang, Dacheng Tao, Zhan Qin | Published: 2025-09-03
Prompt Validation
Prompt Leaking
Model Extraction Attack

EverTracer: Hunting Stolen Large Language Models via Stealthy and Robust Probabilistic Fingerprint

Authors: Zhenhua Xu, Meng Han, Wenpeng Xing | Published: 2025-09-03
Disabling Safety Mechanisms of LLM
Data Protection Method
Prompt Validation