LLM Security

Improved Techniques for Optimization-Based Jailbreaking on Large Language Models

Authors: Xiaojun Jia, Tianyu Pang, Chao Du, Yihao Huang, Jindong Gu, Yang Liu, Xiaochun Cao, Min Lin | Published: 2024-05-31 | Updated: 2024-06-05
LLM Security
Watermarking
Prompt Injection

Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems

Authors: Ruochen Jiao, Shaoyuan Xie, Justin Yue, Takami Sato, Lixu Wang, Yixuan Wang, Qi Alfred Chen, Qi Zhu | Published: 2024-05-27 | Updated: 2025-04-30
LLM Security
Backdoor Attack
Prompt Injection

Visual-RolePlay: Universal Jailbreak Attack on MultiModal Large Language Models via Role-playing Image Character

Authors: Siyuan Ma, Weidi Luo, Yu Wang, Xiaogeng Liu | Published: 2024-05-25 | Updated: 2024-06-12
LLM Security
Prompt Injection
Attack Method

A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions

Authors: Mohammed Hassanin, Nour Moustafa | Published: 2024-05-23
LLM Security
Cybersecurity
Prompt Injection

Learnable Privacy Neurons Localization in Language Models

Authors: Ruizhe Chen, Tianxiang Hu, Yang Feng, Zuozhu Liu | Published: 2024-05-16
LLM Security
Privacy Protection Method
Membership Inference

Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM

Authors: Xikang Yang, Xuehai Tang, Songlin Hu, Jizhong Han | Published: 2024-05-09
LLM Security
Prompt Injection
Attack Method

Special Characters Attack: Toward Scalable Training Data Extraction From Large Language Models

Authors: Yang Bai, Ge Pei, Jindong Gu, Yong Yang, Xingjun Ma | Published: 2024-05-09 | Updated: 2024-05-20
LLM Security
Watermarking
Weapon Ownership

PLLM-CS: Pre-trained Large Language Model (LLM) for Cyber Threat Detection in Satellite Networks

Authors: Mohammed Hassanin, Marwa Keshk, Sara Salim, Majid Alsubaie, Dharmendra Sharma | Published: 2024-05-09
LLM Security
Cybersecurity
Anomaly Detection Method

Large Language Models for Cyber Security: A Systematic Literature Review

Authors: Hanxiang Xu, Shenao Wang, Ningke Li, Kailong Wang, Yanjie Zhao, Kai Chen, Ting Yu, Yang Liu, Haoyu Wang | Published: 2024-05-08 | Updated: 2025-05-15
LLM Security
Indirect Prompt Injection
Literature Review

LLM Security Guard for Code

Authors: Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, Mohammad Ghafari | Published: 2024-05-02 | Updated: 2024-05-03
LLM Security
Security Analysis
Prompt Injection