AI Security Portal Bot

Cape: Context-Aware Prompt Perturbation Mechanism with Differential Privacy

Authors: Haoqi Wu, Wei Dai, Li Wang, Qiang Yan | Published: 2025-05-09 | Updated: 2025-05-15
Token Identification Method
Privacy Design Principles
Evaluation Method

AGENTFUZZER: Generic Black-Box Fuzzing for Indirect Prompt Injection against LLM Agents

Authors: Zhun Wang, Vincent Siu, Zhe Ye, Tianneng Shi, Yuzhou Nie, Xuandong Zhao, Chenguang Wang, Wenbo Guo, Dawn Song | Published: 2025-05-09 | Updated: 2025-05-21
Indirect Prompt Injection
Fuzzing
Attack Type

LLM-Text Watermarking based on Lagrange Interpolation

Authors: Jarosław Janas, Paweł Morawiecki, Josef Pieprzyk | Published: 2025-05-09 | Updated: 2025-05-13
LLM Security
Prompt Leaking
Digital Watermarking for Generative AI
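
The digest does not describe this paper's actual watermarking construction; as generic background, here is a minimal sketch of plain Lagrange interpolation, the mathematical tool named in the title. The function name and example points are illustrative only, not from the paper.

```python
from fractions import Fraction

def lagrange_interpolate(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at `x`.

    `points` is a list of (x_i, y_i) pairs with distinct x_i; exact
    rational arithmetic avoids floating-point error.
    """
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        # Basis term L_i(x) scaled by y_i: product over j != i of (x - x_j)/(x_i - x_j)
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# The unique degree-<=2 polynomial through (0,1), (1,3), (2,7) is x^2 + x + 1.
print(lagrange_interpolate([(0, 1), (1, 3), (2, 7)], 4))  # -> 21
```

Watermarking schemes built on this tool typically hide secret values as polynomial coefficients or shares, recoverable from enough evaluation points.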

Revealing Weaknesses in Text Watermarking Through Self-Information Rewrite Attacks

Authors: Yixin Cheng, Hongcheng Guo, Yangming Li, Leonid Sigal | Published: 2025-05-08
Prompt Leaking
Attack Method
Watermarking Technology

FedTDP: A Privacy-Preserving and Unified Framework for Trajectory Data Preparation via Federated Learning

Authors: Zhihao Zeng, Ziquan Fang, Wei Shao, Lu Chen, Yunjun Gao | Published: 2025-05-08
Privacy Design Principles
Model Design
Machine Learning Technology

A Weighted Byzantine Fault Tolerance Consensus Driven Trusted Multiple Large Language Models Network

Authors: Haoxiang Luo, Gang Sun, Yinqiu Liu, Dongcheng Zhao, Dusit Niyato, Hongfang Yu, Schahram Dustdar | Published: 2025-05-08
Byzantine Consensus Mechanism
Model DoS
Reliability Assessment
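
The paper's weighted consensus protocol is not detailed in this digest; as a hedged illustration of the general idea of weighting untrusted voters, here is a minimal weighted-majority vote among model responses. All names and the 2/3 threshold are my own assumptions, not the authors' design.

```python
from collections import defaultdict

def weighted_majority(votes, weights, threshold=2/3):
    """Return the answer whose supporters' combined weight exceeds
    `threshold` of the total weight, or None if no answer qualifies.

    `votes` maps node -> answer; `weights` maps node -> trust weight.
    """
    total = sum(weights.values())
    tally = defaultdict(float)
    for node, answer in votes.items():
        tally[answer] += weights[node]
    for answer, weight in tally.items():
        if weight > threshold * total:
            return answer
    return None

# Three LLM nodes with unequal trust weights; "yes" carries 5 of 6 total.
print(weighted_majority({"llm_a": "yes", "llm_b": "yes", "llm_c": "no"},
                        {"llm_a": 3, "llm_b": 2, "llm_c": 1}))  # -> yes
```

In Byzantine settings the threshold is chosen so that faulty or malicious nodes below a bounded total weight cannot force a decision.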

An Agent-Based Modeling Approach to Free-Text Keyboard Dynamics for Continuous Authentication

Authors: Roberto Dillon, Arushi | Published: 2025-05-08
Typing Behavior Model
User Characteristics
Machine Learning Application

Federated Learning for Cyber Physical Systems: A Comprehensive Survey

Authors: Minh K. Quan, Pubudu N. Pathirana, Mayuri Wijayasundara, Sujeeva Setunge, Dinh C. Nguyen, Christopher G. Brinton, David J. Love, H. Vincent Poor | Published: 2025-05-08
Decentralized FL-CPS
Machine Learning Application
Federated Learning

Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs

Authors: Chetan Pathade | Published: 2025-05-07 | Updated: 2025-05-13
LLM Security
Disabling Safety Mechanisms of LLM
Prompt Injection

Safeguard-by-Development: A Privacy-Enhanced Development Paradigm for Multi-Agent Collaboration Systems

Authors: Jian Cui, Zichuan Li, Luyi Xing, Xiaojing Liao | Published: 2025-05-07 | Updated: 2025-06-24
Privacy Protection
Privacy Protection Framework
Prompt Injection