AI Security Portal bot

Prompt Flow Integrity to Prevent Privilege Escalation in LLM Agents

Authors: Juhee Kim, Woohyuk Choi, Byoungyoung Lee | Published: 2025-03-17 | Updated: 2025-04-21
Indirect Prompt Injection
Data Flow Analysis
Attack Method

BLIA: Detect model memorization in binary classification model through passive Label Inference attack

Authors: Mohammad Wahiduzzaman Khan, Sheng Chen, Ilya Mironov, Leizhen Zhang, Rabib Noor | Published: 2025-03-17
Data Curation
Differential Privacy
Attack Method

Enforcing Cybersecurity Constraints for LLM-driven Robot Agents for Online Transactions

Authors: Shraddha Pradipbhai Shah, Aditya Vilas Deshpande | Published: 2025-03-17
Indirect Prompt Injection
Cyber Threat
User Authentication System

Research on Large Language Model Cross-Cloud Privacy Protection and Collaborative Training based on Federated Learning

Authors: Ze Yang, Yihong Jin, Yihan Zhang, Juntian Liu, Xinhe Xu | Published: 2025-03-15
Indirect Prompt Injection
Data Protection Method
Privacy Protection Method

TFHE-Coder: Evaluating LLM-agentic Fully Homomorphic Encryption Code Generation

Authors: Mayank Kumar, Jiaqi Xue, Mengxin Zheng, Qian Lou | Published: 2025-03-15
Few-Shot Learning
RAG
Deep Learning

Winning the MIDST Challenge: New Membership Inference Attacks on Diffusion Models for Tabular Data Synthesis

Authors: Xiaoyu Wu, Yifei Pang, Terrance Liu, Steven Wu | Published: 2025-03-15
Data Generation Method
Membership Disclosure Risk
Attack Method

Identifying Likely-Reputable Blockchain Projects on Ethereum

Authors: Cyrus Malik, Josef Bajada, Joshua Ellul | Published: 2025-03-14
Data Extraction and Analysis
Risk Analysis Method
Feature Engineering

Trust Under Siege: Label Spoofing Attacks against Machine Learning for Android Malware Detection

Authors: Tianwei Lan, Luca Demetrio, Farid Nait-Abdesselam, Yufei Han, Simone Aonzo | Published: 2025-03-14
Backdoor Attack
Label
Attack Method

Synthesizing Access Control Policies using Large Language Models

Authors: Adarsh Vatsa, Pratyush Patel, William Eiers | Published: 2025-03-14
Bias Detection in AI Output
Data Generation Method
Privacy Design Principles

Align in Depth: Defending Jailbreak Attacks via Progressive Answer Detoxification

Authors: Yingjie Zhang, Tong Liu, Zhe Zhao, Guozhu Meng, Kai Chen | Published: 2025-03-14
Disabling Safety Mechanisms of LLMs
Prompt Injection
Malicious Prompt