Can Federated Learning Safeguard Private Data in LLM Training? Vulnerabilities, Attacks, and Defense Evaluation Authors: Wenkai Guo, Xuefeng Liu, Haolin Wang, Jianwei Niu, Shaojie Tang, Jing Yuan | Published: 2025-09-25 | Tags: Privacy Protection Method, Prompt Injection, Poisoning
A Framework for Rapidly Developing and Deploying Protection Against Large Language Model Attacks Authors: Adam Swanda, Amy Chang, Alexander Chen, Fraser Burch, Paul Kassianik, Konstantin Berlin | Published: 2025-09-25 | Tags: Indirect Prompt Injection, Security Metric, Prompt Injection
Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation Authors: Tharcisse Ndayipfukamiye, Jianguo Ding, Doreen Sebastian Sarwatt, Adamu Gaston Philipo, Huansheng Ning | Published: 2025-09-24 | Updated: 2025-09-30 | Tags: Prompt Injection, Certified Robustness, Defense Mechanism
bi-GRPO: Bidirectional Optimization for Jailbreak Backdoor Injection on LLMs Authors: Wence Ji, Jiancan Wu, Aiying Li, Shuyi Zhang, Junkang Wu, An Zhang, Xiang Wang, Xiangnan He | Published: 2025-09-24 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Generative Model
LLMs as verification oracles for Solidity Authors: Massimo Bartoletti, Enrico Lipparini, Livio Pompianu | Published: 2025-09-23 | Tags: Prompt Injection, Model DoS, Vulnerability Assessment Method
LLM-based Vulnerability Discovery through the Lens of Code Metrics Authors: Felix Weissberg, Lukas Pirch, Erik Imgrund, Jonas Möller, Thorsten Eisenhofer, Konrad Rieck | Published: 2025-09-23 | Tags: Code Metrics Evaluation, Prompt Injection, Large Language Model
LLM-Driven SAST-Genius: A Hybrid Static Analysis Framework for Comprehensive and Actionable Security Authors: Vaibhav Agrawal, Kiarash Ahi | Published: 2025-09-18 | Updated: 2025-09-23 | Tags: Prompt Injection, Vulnerability Assessment Method, Static Analysis
Evil Vizier: Vulnerabilities of LLM-Integrated XR Systems Authors: Yicheng Zhang, Zijian Huang, Sophie Chen, Erfan Shayegani, Jiasi Chen, Nael Abu-Ghazaleh | Published: 2025-09-18 | Tags: Security Analysis, Prompt Injection, Attack Action Model
Beyond Surface Alignment: Rebuilding LLMs Safety Mechanism via Probabilistically Ablating Refusal Direction Authors: Yuanbo Xie, Yingjie Zhang, Tianyun Liu, Duohe Ma, Tingwen Liu | Published: 2025-09-18 | Tags: Prompt Injection, Safety Alignment, Refusal Mechanism
A Multi-Agent LLM Defense Pipeline Against Prompt Injection Attacks Authors: S M Asif Hossain, Ruksat Khan Shayoni, Mohd Ruhul Ameen, Akif Islam, M. F. Mridha, Jungpil Shin | Published: 2025-09-16 | Updated: 2025-10-01 | Tags: Indirect Prompt Injection, Prompt Injection, Decentralized LLM Architecture