AI Security Portal Bot

Data Exfiltration by Compression Attack: Definition and Evaluation on Medical Image Data

Authors: Huiyu Li, Nicholas Ayache, Hervé Delingette | Published: 2025-11-26
Analysis Methods for Data Exfiltration
Data Flow Analysis
Image Processing

GuardTrace-VL: Detecting Unsafe Multimodal Reasoning via Iterative Safety Supervision

Authors: Yuxiao Xiang, Junchi Chen, Zhenchao Jin, Changtao Miao, Haojie Yuan, Qi Chu, Tao Gong, Nenghai Yu | Published: 2025-11-26
Prompt Injection
Risk Assessment Method
Ethical Considerations

Privacy-Preserving Federated Vision Transformer Learning Leveraging Lightweight Homomorphic Encryption in Medical AI

Authors: Al Amin, Kamrul Hasan, Liang Hong, Sharif Ullah | Published: 2025-11-26
Privacy Assessment
Encryption Algorithm
Federated Learning System

From One Attack Domain to Another: Contrastive Transfer Learning with Siamese Networks for APT Detection

Authors: Sidahmed Benabderrahmane, Talal Rahwan | Published: 2025-11-25
Poisoning
Feature Selection
Anomaly Detection Algorithm

APT-CGLP: Advanced Persistent Threat Hunting via Contrastive Graph-Language Pre-Training

Authors: Xuebo Qiu, Mingqi Lv, Yimei Zhang, Tieming Chen, Tiantian Zhu, Qijie Song, Shouling Ji | Published: 2025-11-25
Graph Transformation
Adversarial Learning
Deep Learning

Can LLMs Make (Personalized) Access Control Decisions?

Authors: Friederike Groschupp, Daniele Lain, Aritra Dhar, Lara Magdalena Lazier, Srdjan Čapkun | Published: 2025-11-25
Disabling Safety Mechanisms of LLM
Privacy Assessment
Prompt Injection

On the Feasibility of Hijacking MLLMs’ Decision Chain via One Perturbation

Authors: Changyue Li, Jiaying Li, Youliang Yuan, Jiaming He, Zhicong Huang, Pinjia He | Published: 2025-11-25
Robustness Improvement Method
Image Processing
Adaptive Adversarial Training

Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization

Authors: Xurui Li, Kaisong Song, Rui Zhu, Pin-Yu Chen, Haixu Tang | Published: 2025-11-24
Prompt Injection
Large Language Model
Malicious Prompt

Can LLMs Threaten Human Survival? Benchmarking Potential Existential Threats from LLMs via Prefix Completion

Authors: Yu Cui, Yifei Liu, Hang Fu, Sicheng Pan, Haibin Zhang, Cong Zuo, Licheng Wang | Published: 2025-11-24
Indirect Prompt Injection
Prompt Injection
Risk Assessment Method

Understanding and Mitigating Over-refusal for Large Language Models via Safety Representation

Authors: Junbo Zhang, Ran Chen, Qianli Zhou, Xinyang Deng, Wen Jiang | Published: 2025-11-24
Disabling Safety Mechanisms of LLM
Prompt Injection
Malicious Prompt