Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM | Authors: Xikang Yang, Xuehai Tang, Songlin Hu, Jizhong Han | Published: 2024-05-09 | Tags: LLM Security, Prompt Injection, Attack Method
Special Characters Attack: Toward Scalable Training Data Extraction From Large Language Models | Authors: Yang Bai, Ge Pei, Jindong Gu, Yong Yang, Xingjun Ma | Published: 2024-05-09 | Updated: 2024-05-20 | Tags: LLM Security, Watermarking, Weapon Ownership
PLLM-CS: Pre-trained Large Language Model (LLM) for Cyber Threat Detection in Satellite Networks | Authors: Mohammed Hassanin, Marwa Keshk, Sara Salim, Majid Alsubaie, Dharmendra Sharma | Published: 2024-05-09 | Tags: LLM Security, Cybersecurity, Anomaly Detection Method
Large Language Models for Cyber Security: A Systematic Literature Review | Authors: Hanxiang Xu, Shenao Wang, Ningke Li, Kailong Wang, Yanjie Zhao, Kai Chen, Ting Yu, Yang Liu, Haoyu Wang | Published: 2024-05-08 | Updated: 2025-05-15 | Tags: LLM Security, Indirect Prompt Injection, Literature Review
LLM Security Guard for Code | Authors: Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, Mohammad Ghafari | Published: 2024-05-02 | Updated: 2024-05-03 | Tags: LLM Security, Security Analysis, Prompt Injection
Attacks on Third-Party APIs of Large Language Models | Authors: Wanru Zhao, Vidit Khazanchi, Haodi Xing, Xuanli He, Qiongkai Xu, Nicholas Donald Lane | Published: 2024-04-24 | Tags: LLM Security, Prompt Injection, Attack Method
Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models | Authors: Jiaming He, Wenbo Jiang, Guanyu Hou, Wenshu Fan, Rui Zhang, Hongwei Li | Published: 2024-04-23 | Updated: 2025-01-08 | Tags: LLM Security, Backdoor Attack, Poisoning
Protecting Your LLMs with Information Bottleneck | Authors: Zichuan Liu, Zefan Wang, Linjie Xu, Jinyu Wang, Lei Song, Tianchun Wang, Chunlin Chen, Wei Cheng, Jiang Bian | Published: 2024-04-22 | Updated: 2024-10-10 | Tags: LLM Security, Prompt Injection, Compliance with Ethical Guidelines
Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs | Authors: Javier Rando, Francesco Croce, Kryštof Mitka, Stepan Shabalin, Maksym Andriushchenko, Nicolas Flammarion, Florian Tramèr | Published: 2024-04-22 | Updated: 2024-06-06 | Tags: LLM Security, Backdoor Attack, Prompt Injection
AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs | Authors: Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian | Published: 2024-04-21 | Tags: LLM Security, Prompt Injection, Prompt Engineering