Prompt Injection

Odysseus: Jailbreaking Commercial Multimodal LLM-integrated Systems via Dual Steganography

Authors: Songze Li, Jiameng Cheng, Yiming Li, Xiaojun Jia, Dacheng Tao | Published: 2025-12-23
Disabling Safety Mechanisms of LLMs
Prompt Injection
Multimodal Safety

On the Effectiveness of Instruction-Tuning Local LLMs for Identifying Software Vulnerabilities

Authors: Sangryu Park, Gihyuk Ko, Homook Cho | Published: 2025-12-23
Prompt Injection
Large Language Model
Vulnerability Analysis

GShield: Mitigating Poisoning Attacks in Federated Learning

Authors: Sameera K. M., Serena Nicolazzo, Antonino Nocera, Vinod P., Rafidha Rehiman K. A. | Published: 2025-12-22
Data Poisoning Attack
Prompt Injection
Poisoning

Efficient Jailbreak Mitigation Using Semantic Linear Classification in a Multi-Staged Pipeline

Authors: Akshaj Prashanth Rao, Advait Singh, Saumya Kumaar Saksena, Dhruv Kumar | Published: 2025-12-22
Prompt Injection
Watermark
Defense Mechanism

Prefix Probing: Lightweight Harmful Content Detection for Large Language Models

Authors: Jirui Yang, Hengqi Guo, Zhihui Lu, Yi Zhao, Yuansen Zhang, Shijing Hu, Qiang Duan, Yinggui Wang, Tao Wei | Published: 2025-12-18
Token Distribution Analysis
Prompt Injection
Prompt Leaking

A Systematic Study of Code Obfuscation Against LLM-based Vulnerability Detection

Authors: Xiao Li, Yue Li, Hao Wu, Yue Zhang, Yechao Zhang, Fengyuan Xu, Sheng Zhong | Published: 2025-12-18
Indirect Prompt Injection
Prompt Injection
Obfuscation Technique

Quantifying Return on Security Controls in LLM Systems

Authors: Richard Helder Moulton, Austin O'Brien, John D. Hastings | Published: 2025-12-17
Prompt Injection
Risk Analysis Method
Vulnerability Detection Method

PentestEval: Benchmarking LLM-based Penetration Testing with Modular and Stage-Level Design

Authors: Ruozhao Yang, Mingfei Cheng, Gelei Deng, Tianwei Zhang, Junjie Wang, Xiaofei Xie | Published: 2025-12-16
Indirect Prompt Injection
Prompt Injection
Vulnerability Management

FlipLLM: Efficient Bit-Flip Attacks on Multimodal LLMs using Reinforcement Learning

Authors: Khurram Khalil, Khaza Anuarul Hoque | Published: 2025-12-10
Prompt Injection
Large Language Model
Vulnerability Assessment Method

Chasing Shadows: Pitfalls in LLM Security Research

Authors: Jonathan Evertz, Niklas Risse, Nicolai Neuer, Andreas Müller, Philipp Normann, Gaetano Sapia, Srishti Gupta, David Pape, Soumya Shaw, Devansh Srivastav, Christian Wressnegger, Erwin Quiring, Thorsten Eisenhofer, Daniel Arp, Lea Schönherr | Published: 2025-12-10
Prompt Injection
Prompt Leaking