AI Security Portal Bot

ControlNET: A Firewall for RAG-based LLM System

Authors: Hongwei Yao, Haoran Shi, Yidou Chen, Yixin Jiang, Cong Wang, Zhan Qin | Published: 2025-04-13 | Updated: 2025-04-17
Poisoning Attack on RAG
Indirect Prompt Injection
Data Breach Risk

CheatAgent: Attacking LLM-Empowered Recommender Systems via LLM Agent

Authors: Liang-bo Ning, Shijie Wang, Wenqi Fan, Qing Li, Xin Xu, Hao Chen, Feiran Huang | Published: 2025-04-13 | Updated: 2025-04-24
Indirect Prompt Injection
Prompt Injection
Attacker Behavior Analysis

On the Practice of Deep Hierarchical Ensemble Network for Ad Conversion Rate Prediction

Authors: Jinfeng Zhuang, Yinrui Li, Runze Su, Ke Xu, Zhixuan Shao, Kungang Li, Ling Leng, Han Sun, Meng Qi, Yixiong Meng, Yang Tang, Zhifang Liu, Qifei Shen, Aayush Mudgal, Caleb Lu, Jie Liu, Hongda Shen | Published: 2025-04-10 | Updated: 2025-04-23
User Experience Evaluation
Improvement of Learning
Machine Learning Application

PR-Attack: Coordinated Prompt-RAG Attacks on Retrieval-Augmented Generation in Large Language Models via Bilevel Optimization

Authors: Yang Jiao, Xiaodong Wang, Kai Yang | Published: 2025-04-10 | Updated: 2025-04-17
LLM Performance Evaluation
Poisoning Attack on RAG
Adversarial Attack Assessment

LLM-IFT: LLM-Powered Information Flow Tracking for Secure Hardware

Authors: Nowfel Mashnoor, Mohammad Akyash, Hadi Kamali, Kimia Azar | Published: 2025-04-09
Disabling Safety Mechanisms of LLM
Framework
Efficient Configuration Verification

Large-Scale (Semi-)Automated Security Assessment of Consumer IoT Devices — A Roadmap

Authors: Pascal Schöttle, Matthias Janetschek, Florian Merkle, Martin Nocker, Christoph Egger | Published: 2025-04-09 | Updated: 2025-04-10
IoT Security Framework
Security Testing
Communication System

Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs

Authors: Alhad Daftardar, Jianqiao Mo, Joey Ah-kiow, Benedikt Bünz, Ramesh Karri, Siddharth Garg, Brandon Reagen | Published: 2025-04-08
Efficient Proof System
Secure Arithmetic Computation
Watermark Design

CTI-HAL: A Human-Annotated Dataset for Cyber Threat Intelligence Analysis

Authors: Sofia Della Penna, Roberto Natella, Vittorio Orbinato, Lorenzo Parracino, Luciano Pianese | Published: 2025-04-08
LLM Application
Model Performance Evaluation
Large Language Model

Separator Injection Attack: Uncovering Dialogue Biases in Large Language Models Caused by Role Separators

Authors: Xitao Li, Haijun Wang, Jiang Wu, Ting Liu | Published: 2025-04-08
Indirect Prompt Injection
Prompting Strategy
Model Performance Evaluation

Sugar-Coated Poison: Benign Generation Unlocks LLM Jailbreaking

Authors: Yu-Hang Wu, Yu-Jie Xiong, Jie-Zhang | Published: 2025-04-08
LLM Application
Prompt Injection
Large Language Model