Prompt Injection

Can Federated Learning Safeguard Private Data in LLM Training? Vulnerabilities, Attacks, and Defense Evaluation

Authors: Wenkai Guo, Xuefeng Liu, Haolin Wang, Jianwei Niu, Shaojie Tang, Jing Yuan | Published: 2025-09-25
Privacy Protection Method
Prompt Injection
Poisoning

A Framework for Rapidly Developing and Deploying Protection Against Large Language Model Attacks

Authors: Adam Swanda, Amy Chang, Alexander Chen, Fraser Burch, Paul Kassianik, Konstantin Berlin | Published: 2025-09-25
Indirect Prompt Injection
Security Metric
Prompt Injection

Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation

Authors: Tharcisse Ndayipfukamiye, Jianguo Ding, Doreen Sebastian Sarwatt, Adamu Gaston Philipo, Huansheng Ning | Published: 2025-09-24 | Updated: 2025-09-30
Prompt Injection
Certified Robustness
Defense Mechanism

bi-GRPO: Bidirectional Optimization for Jailbreak Backdoor Injection on LLMs

Authors: Wence Ji, Jiancan Wu, Aiying Li, Shuyi Zhang, Junkang Wu, An Zhang, Xiang Wang, Xiangnan He | Published: 2025-09-24
Disabling Safety Mechanisms of LLM
Prompt Injection
Generative Model

LLMs as verification oracles for Solidity

Authors: Massimo Bartoletti, Enrico Lipparini, Livio Pompianu | Published: 2025-09-23
Prompt Injection
Model DoS
Vulnerability Assessment Method

LLM-based Vulnerability Discovery through the Lens of Code Metrics

Authors: Felix Weissberg, Lukas Pirch, Erik Imgrund, Jonas Möller, Thorsten Eisenhofer, Konrad Rieck | Published: 2025-09-23
Code Metrics Evaluation
Prompt Injection
Large Language Model

LLM-Driven SAST-Genius: A Hybrid Static Analysis Framework for Comprehensive and Actionable Security

Authors: Vaibhav Agrawal, Kiarash Ahi | Published: 2025-09-18 | Updated: 2025-09-23
Prompt Injection
Vulnerability Assessment Method
Static Analysis

Evil Vizier: Vulnerabilities of LLM-Integrated XR Systems

Authors: Yicheng Zhang, Zijian Huang, Sophie Chen, Erfan Shayegani, Jiasi Chen, Nael Abu-Ghazaleh | Published: 2025-09-18
Security Analysis
Prompt Injection
Attack Action Model

Beyond Surface Alignment: Rebuilding LLMs Safety Mechanism via Probabilistically Ablating Refusal Direction

Authors: Yuanbo Xie, Yingjie Zhang, Tianyun Liu, Duohe Ma, Tingwen Liu | Published: 2025-09-18
Prompt Injection
Safety Alignment
Refusal Mechanism

A Multi-Agent LLM Defense Pipeline Against Prompt Injection Attacks

Authors: S M Asif Hossain, Ruksat Khan Shayoni, Mohd Ruhul Ameen, Akif Islam, M. F. Mridha, Jungpil Shin | Published: 2025-09-16 | Updated: 2025-10-01
Indirect Prompt Injection
Prompt Injection
Decentralized LLM Architecture