Odysseus: Jailbreaking Commercial Multimodal LLM-integrated Systems via Dual Steganography | Authors: Songze Li, Jiameng Cheng, Yiming Li, Xiaojun Jia, Dacheng Tao | Published: 2025-12-23 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Multimodal Safety
On the Effectiveness of Instruction-Tuning Local LLMs for Identifying Software Vulnerabilities | Authors: Sangryu Park, Gihyuk Ko, Homook Cho | Published: 2025-12-23 | Tags: Prompt Injection, Large Language Model, Vulnerability Analysis
GShield: Mitigating Poisoning Attacks in Federated Learning | Authors: Sameera K. M., Serena Nicolazzo, Antonino Nocera, Vinod P., Rafidha Rehiman K. A. | Published: 2025-12-22 | Tags: Data Poisoning Attack, Prompt Injection, Poisoning
Efficient Jailbreak Mitigation Using Semantic Linear Classification in a Multi-Staged Pipeline | Authors: Akshaj Prashanth Rao, Advait Singh, Saumya Kumaar Saksena, Dhruv Kumar | Published: 2025-12-22 | Tags: Prompt Injection, Watermark, Defense Mechanism
Prefix Probing: Lightweight Harmful Content Detection for Large Language Models | Authors: Jirui Yang, Hengqi Guo, Zhihui Lu, Yi Zhao, Yuansen Zhang, Shijing Hu, Qiang Duan, Yinggui Wang, Tao Wei | Published: 2025-12-18 | Tags: Token Distribution Analysis, Prompt Injection, Prompt Leaking
A Systematic Study of Code Obfuscation Against LLM-based Vulnerability Detection | Authors: Xiao Li, Yue Li, Hao Wu, Yue Zhang, Yechao Zhang, Fengyuan Xu, Sheng Zhong | Published: 2025-12-18 | Tags: Indirect Prompt Injection, Prompt Injection, Obfuscation Techniques
Quantifying Return on Security Controls in LLM Systems | Authors: Richard Helder Moulton, Austin O'Brien, John D. Hastings | Published: 2025-12-17 | Tags: Prompt Injection, Risk Analysis Method, Vulnerability Detection Method
PentestEval: Benchmarking LLM-based Penetration Testing with Modular and Stage-Level Design | Authors: Ruozhao Yang, Mingfei Cheng, Gelei Deng, Tianwei Zhang, Junjie Wang, Xiaofei Xie | Published: 2025-12-16 | Tags: Indirect Prompt Injection, Prompt Injection, Vulnerability Management
FlipLLM: Efficient Bit-Flip Attacks on Multimodal LLMs using Reinforcement Learning | Authors: Khurram Khalil, Khaza Anuarul Hoque | Published: 2025-12-10 | Tags: Prompt Injection, Large Language Model, Vulnerability Assessment Method
Chasing Shadows: Pitfalls in LLM Security Research | Authors: Jonathan Evertz, Niklas Risse, Nicolai Neuer, Andreas Müller, Philipp Normann, Gaetano Sapia, Srishti Gupta, David Pape, Soumya Shaw, Devansh Srivastav, Christian Wressnegger, Erwin Quiring, Thorsten Eisenhofer, Daniel Arp, Lea Schönherr | Published: 2025-12-10 | Tags: Prompt Injection, Prompt Leaking