EquaCode: A Multi-Strategy Jailbreak Approach for Large Language Models via Equation Solving and Code Completion | Authors: Zhen Liang, Hai Huang, Zhengkui Chen | Published: 2025-12-29 | Tags: Disabling Safety Mechanisms of LLM, LLM Utilization, Prompt Injection
Casting a SPELL: Sentence Pairing Exploration for LLM Limitation-breaking | Authors: Yifan Huang, Xiaojun Jia, Wenbo Guo, Yuqiang Sun, Yihao Huang, Chong Wang, Yang Liu | Published: 2025-12-24 | Tags: Data Selection Strategy, Prompt Injection, Adversarial Attack Detection
AegisAgent: An Autonomous Defense Agent Against Prompt Injection Attacks in LLM-HARs | Authors: Yihan Wang, Huanqi Yang, Shantanu Pal, Weitao Xu | Published: 2025-12-24 | Tags: Indirect Prompt Injection, Prompt Injection, Adversarial Attack Assessment
Odysseus: Jailbreaking Commercial Multimodal LLM-integrated Systems via Dual Steganography | Authors: Songze Li, Jiameng Cheng, Yiming Li, Xiaojun Jia, Dacheng Tao | Published: 2025-12-23 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Multimodal Safety
On the Effectiveness of Instruction-Tuning Local LLMs for Identifying Software Vulnerabilities | Authors: Sangryu Park, Gihyuk Ko, Homook Cho | Published: 2025-12-23 | Tags: Prompt Injection, Large Language Model, Vulnerability Analysis
GShield: Mitigating Poisoning Attacks in Federated Learning | Authors: Sameera K. M., Serena Nicolazzo, Antonino Nocera, Vinod P., Rafidha Rehiman K. A. | Published: 2025-12-22 | Tags: Data Poisoning Attack, Prompt Injection, Poisoning
Efficient Jailbreak Mitigation Using Semantic Linear Classification in a Multi-Staged Pipeline | Authors: Akshaj Prashanth Rao, Advait Singh, Saumya Kumaar Saksena, Dhruv Kumar | Published: 2025-12-22 | Tags: Prompt Injection, Watermark, Defense Mechanism
Prefix Probing: Lightweight Harmful Content Detection for Large Language Models | Authors: Jirui Yang, Hengqi Guo, Zhihui Lu, Yi Zhao, Yuansen Zhang, Shijing Hu, Qiang Duan, Yinggui Wang, Tao Wei | Published: 2025-12-18 | Tags: Token Distribution Analysis, Prompt Injection, Prompt Leaking
A Systematic Study of Code Obfuscation Against LLM-based Vulnerability Detection | Authors: Xiao Li, Yue Li, Hao Wu, Yue Zhang, Yechao Zhang, Fengyuan Xu, Sheng Zhong | Published: 2025-12-18 | Tags: Indirect Prompt Injection, Prompt Injection, Obfuscation Techniques
Quantifying Return on Security Controls in LLM Systems | Authors: Richard Helder Moulton, Austin O'Brien, John D. Hastings | Published: 2025-12-17 | Tags: Prompt Injection, Risk Analysis Method, Vulnerability Detection Method