Literature Database

The Literature Database categorizes and aggregates literature related to AI security. For more details, please see About Literature Database. We provide statistical information regarding the Literature Database on the Statistics page.

Constructing and Benchmarking: a Labeled Email Dataset for Text-Based Phishing and Spam Detection Framework

Authors: Rebeka Toth, Tamas Bisztray, Richard Dubniczky | Published: 2025-11-26
Social Engineering Attack
Dataset Integration
Prompt Injection

Data Exfiltration by Compression Attack: Definition and Evaluation on Medical Image Data

Authors: Huiyu Li, Nicholas Ayache, Hervé Delingette | Published: 2025-11-26
Data Exfiltration Analysis Method
Data Flow Analysis
Image Processing

GuardTrace-VL: Detecting Unsafe Multimodal Reasoning via Iterative Safety Supervision

Authors: Yuxiao Xiang, Junchi Chen, Zhenchao Jin, Changtao Miao, Haojie Yuan, Qi Chu, Tao Gong, Nenghai Yu | Published: 2025-11-26
Prompt Injection
Risk Assessment Method
Ethical Considerations

Privacy-Preserving Federated Vision Transformer Learning Leveraging Lightweight Homomorphic Encryption in Medical AI

Authors: Al Amin, Kamrul Hasan, Liang Hong, Sharif Ullah | Published: 2025-11-26
Privacy Assessment
Encryption Algorithm
Federated Learning System

From One Attack Domain to Another: Contrastive Transfer Learning with Siamese Networks for APT Detection

Authors: Sidahmed Benabderrahmane, Talal Rahwan | Published: 2025-11-25
Poisoning
Feature Selection
Anomaly Detection Algorithm

APT-CGLP: Advanced Persistent Threat Hunting via Contrastive Graph-Language Pre-Training

Authors: Xuebo Qiu, Mingqi Lv, Yimei Zhang, Tieming Chen, Tiantian Zhu, Qijie Song, Shouling Ji | Published: 2025-11-25
Graph Transformation
Adversarial Learning
Deep Learning

Can LLMs Make (Personalized) Access Control Decisions?

Authors: Friederike Groschupp, Daniele Lain, Aritra Dhar, Lara Magdalena Lazier, Srdjan Čapkun | Published: 2025-11-25
Disabling Safety Mechanisms of LLM
Privacy Assessment
Prompt Injection

On the Feasibility of Hijacking MLLMs’ Decision Chain via One Perturbation

Authors: Changyue Li, Jiaying Li, Youliang Yuan, Jiaming He, Zhicong Huang, Pinjia He | Published: 2025-11-25
Robustness Improvement Method
Image Processing
Adaptive Adversarial Training

Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization

Authors: Xurui Li, Kaisong Song, Rui Zhu, Pin-Yu Chen, Haixu Tang | Published: 2025-11-24
Prompt Injection
Large Language Model
Malicious Prompt

Can LLMs Threaten Human Survival? Benchmarking Potential Existential Threats from LLMs via Prefix Completion

Authors: Yu Cui, Yifei Liu, Hang Fu, Sicheng Pan, Haibin Zhang, Cong Zuo, Licheng Wang | Published: 2025-11-24
Indirect Prompt Injection
Prompt Injection
Risk Assessment Method