Does Johnny Get the Message? Evaluating Cybersecurity Notifications for Everyday Users | Authors: Victor Jüttner, Erik Buchmann | Published: 2025-05-28 | Tags: Personalization, Explanation of Prompt Injection Countermeasures
Test-Time Immunization: A Universal Defense Framework Against Jailbreaks for (Multimodal) Large Language Models | Authors: Yongcan Yu, Yanbo Wang, Ran He, Jian Liang | Published: 2025-05-28 | Tags: LLM Security, Prompt Injection, Large Language Model
Jailbreak Distillation: Renewable Safety Benchmarking | Authors: Jingyu Zhang, Ahmed Elgohary, Xiawei Wang, A S M Iftekhar, Ahmed Magooda, Benjamin Van Durme, Daniel Khashabi, Kyle Jackson | Published: 2025-05-28 | Tags: Prompt Injection, Model Evaluation, Attack Evaluation
Breaking the Ceiling: Exploring the Potential of Jailbreak Attacks through Expanding Strategy Space | Authors: Yao Huang, Yitong Sun, Shouwei Ruan, Yichi Zhang, Yinpeng Dong, Xingxing Wei | Published: 2025-05-27 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Attack Evaluation
JavaSith: A Client-Side Framework for Analyzing Potentially Malicious Extensions in Browsers, VS Code, and NPM Packages | Authors: Avihay Cohen | Published: 2025-05-27 | Tags: API Security, Client-Side Defense, Prompt Injection
TrojanStego: Your Language Model Can Secretly Be A Steganographic Privacy Leaking Agent | Authors: Dominik Meier, Jan Philip Wahle, Paul Röttger, Terry Ruas, Bela Gipp | Published: 2025-05-26 | Tags: Prompt Injection, Model Extraction Attack, Watermarking Technology
What Really Matters in Many-Shot Attacks? An Empirical Study of Long-Context Vulnerabilities in LLMs | Authors: Sangyeop Kim, Yohan Lee, Yongwoo Song, Kimin Lee | Published: 2025-05-26 | Tags: Prompt Injection, Model Performance Evaluation, Large Language Model
Invisible Prompts, Visible Threats: Malicious Font Injection in External Resources for Large Language Models | Authors: Junjie Xiong, Changjia Zhu, Shuhang Lin, Chong Zhang, Yongfeng Zhang, Yao Liu, Lingyao Li | Published: 2025-05-22 | Tags: LLM Security, Disabling Safety Mechanisms of LLM, Prompt Injection
CAIN: Hijacking LLM-Humans Conversations via a Two-Stage Malicious System Prompt Generation and Refining Framework | Authors: Viet Pham, Thai Le | Published: 2025-05-22 | Tags: LLM Security, Prompt Injection, Adversarial Learning
When Safety Detectors Aren’t Enough: A Stealthy and Effective Jailbreak Attack on LLMs via Steganographic Techniques | Authors: Jianing Geng, Biao Yi, Zekun Fei, Tongxi Wu, Lihai Nie, Zheli Liu | Published: 2025-05-22 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Watermark Removal Technology