Literature Database

Privacy-Preserving Federated Vision Transformer Learning Leveraging Lightweight Homomorphic Encryption in Medical AI

Authors: Al Amin, Kamrul Hasan, Liang Hong, Sharif Ullah | Published: 2025-11-26
Privacy Assessment
Encryption Algorithm
Federated Learning System

From One Attack Domain to Another: Contrastive Transfer Learning with Siamese Networks for APT Detection

Authors: Sidahmed Benabderrahmane, Talal Rahwan | Published: 2025-11-25
Poisoning
Feature Selection
Anomaly Detection Algorithm

APT-CGLP: Advanced Persistent Threat Hunting via Contrastive Graph-Language Pre-Training

Authors: Xuebo Qiu, Mingqi Lv, Yimei Zhang, Tieming Chen, Tiantian Zhu, Qijie Song, Shouling Ji | Published: 2025-11-25
Graph Transformation
Adversarial Learning
Deep Learning

Can LLMs Make (Personalized) Access Control Decisions?

Authors: Friederike Groschupp, Daniele Lain, Aritra Dhar, Lara Magdalena Lazier, Srdjan Čapkun | Published: 2025-11-25
Disabling Safety Mechanisms of LLM
Privacy Assessment
Prompt Injection

On the Feasibility of Hijacking MLLMs’ Decision Chain via One Perturbation

Authors: Changyue Li, Jiaying Li, Youliang Yuan, Jiaming He, Zhicong Huang, Pinjia He | Published: 2025-11-25
Robustness Improvement Method
Image Processing
Adaptive Adversarial Training

Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization

Authors: Xurui Li, Kaisong Song, Rui Zhu, Pin-Yu Chen, Haixu Tang | Published: 2025-11-24
Prompt Injection
Large Language Model
Malicious Prompt

Can LLMs Threaten Human Survival? Benchmarking Potential Existential Threats from LLMs via Prefix Completion

Authors: Yu Cui, Yifei Liu, Hang Fu, Sicheng Pan, Haibin Zhang, Cong Zuo, Licheng Wang | Published: 2025-11-24
Indirect Prompt Injection
Prompt Injection
Risk Assessment Method

Understanding and Mitigating Over-refusal for Large Language Models via Safety Representation

Authors: Junbo Zhang, Ran Chen, Qianli Zhou, Xinyang Deng, Wen Jiang | Published: 2025-11-24
Disabling Safety Mechanisms of LLM
Prompt Injection
Malicious Prompt

LLM-CSEC: Empirical Evaluation of Security in C/C++ Code Generated by Large Language Models

Authors: Muhammad Usman Shahid, Chuadhry Mujeeb Ahmed, Rajiv Ranjan | Published: 2025-11-24
Automation of Cybersecurity
Prompt Leaking
Risk Assessment Method

Defending Large Language Models Against Jailbreak Exploits with Responsible AI Considerations

Authors: Ryan Wong, Hosea David Yu Fei Ng, Dhananjai Sharma, Glenn Jun Jie Ng, Kavishvaran Srinivasan | Published: 2025-11-24
Ethical Considerations
Large Language Model
Malicious Prompt