LLM Security

Safeguarding Large Language Models: A Survey

Authors: Yi Dong, Ronghui Mu, Yanghao Zhang, Siqi Sun, Tianle Zhang, Changshun Wu, Gaojie Jin, Yi Qi, Jinwei Hu, Jie Meng, Saddek Bensalem, Xiaowei Huang | Published: 2024-06-03
Tags: LLM Security, Guardrail Method, Prompt Injection

PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration

Authors: Ziqian Zeng, Jianwei Wang, Junyao Yang, Zhengdong Lu, Haoran Li, Huiping Zhuang, Cen Chen | Published: 2024-06-03 | Updated: 2025-05-28
Tags: LLM Security, Privacy Classification, Differential Privacy

BELLS: A Framework Towards Future Proof Benchmarks for the Evaluation of LLM Safeguards

Authors: Diego Dorn, Alexandre Variengien, Charbel-Raphaël Segerie, Vincent Corruble | Published: 2024-06-03
Tags: LLM Security, Content Moderation, Prompt Injection

Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models

Authors: Garrett Crumrine, Izzat Alsmadi, Jesus Guerrero, Yuvaraj Munian | Published: 2024-06-02
Tags: LLM Security, Cybersecurity, Compliance with Ethical Guidelines

Exploring Vulnerabilities and Protections in Large Language Models: A Survey

Authors: Frank Weizhen Liu, Chenhui Hu | Published: 2024-06-01
Tags: LLM Security, Prompt Injection, Defense Method

Improved Techniques for Optimization-Based Jailbreaking on Large Language Models

Authors: Xiaojun Jia, Tianyu Pang, Chao Du, Yihao Huang, Jindong Gu, Yang Liu, Xiaochun Cao, Min Lin | Published: 2024-05-31 | Updated: 2024-06-05
Tags: LLM Security, Watermarking, Prompt Injection

Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems

Authors: Ruochen Jiao, Shaoyuan Xie, Justin Yue, Takami Sato, Lixu Wang, Yixuan Wang, Qi Alfred Chen, Qi Zhu | Published: 2024-05-27 | Updated: 2025-04-30
Tags: LLM Security, Backdoor Attack, Prompt Injection

Visual-RolePlay: Universal Jailbreak Attack on MultiModal Large Language Models via Role-playing Image Character

Authors: Siyuan Ma, Weidi Luo, Yu Wang, Xiaogeng Liu | Published: 2024-05-25 | Updated: 2024-06-12
Tags: LLM Security, Prompt Injection, Attack Method

A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions

Authors: Mohammed Hassanin, Nour Moustafa | Published: 2024-05-23
Tags: LLM Security, Cybersecurity, Prompt Injection

Learnable Privacy Neurons Localization in Language Models

Authors: Ruizhe Chen, Tianxiang Hu, Yang Feng, Zuozhu Liu | Published: 2024-05-16
Tags: LLM Security, Privacy Protection Method, Membership Inference