Prompt Injection

GuardTrace-VL: Detecting Unsafe Multimodal Reasoning via Iterative Safety Supervision

Authors: Yuxiao Xiang, Junchi Chen, Zhenchao Jin, Changtao Miao, Haojie Yuan, Qi Chu, Tao Gong, Nenghai Yu | Published: 2025-11-26
Prompt Injection
Risk Assessment Method
Ethical Considerations

Can LLMs Make (Personalized) Access Control Decisions?

Authors: Friederike Groschupp, Daniele Lain, Aritra Dhar, Lara Magdalena Lazier, Srdjan Čapkun | Published: 2025-11-25
Disabling Safety Mechanisms of LLM
Privacy Assessment
Prompt Injection

Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization

Authors: Xurui Li, Kaisong Song, Rui Zhu, Pin-Yu Chen, Haixu Tang | Published: 2025-11-24
Prompt Injection
Large Language Model
Malicious Prompt

Can LLMs Threaten Human Survival? Benchmarking Potential Existential Threats from LLMs via Prefix Completion

Authors: Yu Cui, Yifei Liu, Hang Fu, Sicheng Pan, Haibin Zhang, Cong Zuo, Licheng Wang | Published: 2025-11-24
Indirect Prompt Injection
Prompt Injection
Risk Assessment Method

Understanding and Mitigating Over-refusal for Large Language Models via Safety Representation

Authors: Junbo Zhang, Ran Chen, Qianli Zhou, Xinyang Deng, Wen Jiang | Published: 2025-11-24
Disabling Safety Mechanisms of LLM
Prompt Injection
Malicious Prompt

Small Language Models for Phishing Website Detection: Cost, Performance, and Privacy Trade-Offs

Authors: Georg Goldenits, Philip Koenig, Sebastian Raubitzek, Andreas Ekelhart | Published: 2025-11-19
Phishing Detection Method
Prompt Injection
Prompt Engineering

Can MLLMs Detect Phishing? A Comprehensive Security Benchmark Suite Focusing on Dynamic Threats and Multimodal Evaluation in Academic Environments

Authors: Jingzhuo Zhou | Published: 2025-11-19
Privacy Risk Management
Prompt Injection
Large Language Model

ForgeDAN: An Evolutionary Framework for Jailbreaking Aligned Large Language Models

Authors: Siyang Cheng, Gaotian Liu, Rui Mei, Yilin Wang, Kejia Zhang, Kaishuo Wei, Yuqi Yu, Weiping Wen, Xiaojie Wu, Junhua Liu | Published: 2025-11-17
Prompt Injection
Large Language Model
Evolutionary Algorithm

SGuard-v1: Safety Guardrail for Large Language Models

Authors: JoonHo Lee, HyeonMin Cho, Jaewoong Yun, Hyunjae Lee, JunKyu Lee, Juree Seok | Published: 2025-11-16
Prompt Injection
Malicious Prompt
Adaptive Misuse Detection

SeedAIchemy: LLM-Driven Seed Corpus Generation for Fuzzing

Authors: Aidan Wen, Norah A. Alzahrani, Jingzhi Jiang, Andrew Joe, Karen Shieh, Andy Zhang, Basel Alomair, David Wagner | Published: 2025-11-16
Bug Detection Method
Prompt Injection
Information Security