GuardTrace-VL: Detecting Unsafe Multimodal Reasoning via Iterative Safety Supervision | Authors: Yuxiao Xiang, Junchi Chen, Zhenchao Jin, Changtao Miao, Haojie Yuan, Qi Chu, Tao Gong, Nenghai Yu | Published: 2025-11-26 | Tags: Prompt Injection, Risk Assessment Method, Ethical Considerations
Can LLMs Make (Personalized) Access Control Decisions? | Authors: Friederike Groschupp, Daniele Lain, Aritra Dhar, Lara Magdalena Lazier, Srdjan Čapkun | Published: 2025-11-25 | Tags: Disabling Safety Mechanisms of LLM, Privacy Assessment, Prompt Injection
Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization | Authors: Xurui Li, Kaisong Song, Rui Zhu, Pin-Yu Chen, Haixu Tang | Published: 2025-11-24 | Tags: Prompt Injection, Large Language Model, Malicious Prompt
Can LLMs Threaten Human Survival? Benchmarking Potential Existential Threats from LLMs via Prefix Completion | Authors: Yu Cui, Yifei Liu, Hang Fu, Sicheng Pan, Haibin Zhang, Cong Zuo, Licheng Wang | Published: 2025-11-24 | Tags: Indirect Prompt Injection, Prompt Injection, Risk Assessment Method
Understanding and Mitigating Over-refusal for Large Language Models via Safety Representation | Authors: Junbo Zhang, Ran Chen, Qianli Zhou, Xinyang Deng, Wen Jiang | Published: 2025-11-24 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Malicious Prompt
Small Language Models for Phishing Website Detection: Cost, Performance, and Privacy Trade-Offs | Authors: Georg Goldenits, Philip Koenig, Sebastian Raubitzek, Andreas Ekelhart | Published: 2025-11-19 | Tags: Phishing Detection Method, Prompt Injection, Prompt Engineering
Can MLLMs Detect Phishing? A Comprehensive Security Benchmark Suite Focusing on Dynamic Threats and Multimodal Evaluation in Academic Environments | Authors: Jingzhuo Zhou | Published: 2025-11-19 | Tags: Privacy Risk Management, Prompt Injection, Large Language Model
ForgeDAN: An Evolutionary Framework for Jailbreaking Aligned Large Language Models | Authors: Siyang Cheng, Gaotian Liu, Rui Mei, Yilin Wang, Kejia Zhang, Kaishuo Wei, Yuqi Yu, Weiping Wen, Xiaojie Wu, Junhua Liu | Published: 2025-11-17 | Tags: Prompt Injection, Large Language Model, Evolutionary Algorithm
SGuard-v1: Safety Guardrail for Large Language Models | Authors: JoonHo Lee, HyeonMin Cho, Jaewoong Yun, Hyunjae Lee, JunKyu Lee, Juree Seok | Published: 2025-11-16 | Tags: Prompt Injection, Malicious Prompt, Adaptive Misuse Detection
SeedAIchemy: LLM-Driven Seed Corpus Generation for Fuzzing | Authors: Aidan Wen, Norah A. Alzahrani, Jingzhi Jiang, Andrew Joe, Karen Shieh, Andy Zhang, Basel Alomair, David Wagner | Published: 2025-11-16 | Tags: Bug Detection Method, Prompt Injection, Information Security