LLM Security

Can LLMs get help from other LLMs without revealing private information?

Authors: Florian Hartmann, Duc-Hieu Tran, Peter Kairouz, Victor Cărbune, Blaise Aguera y Arcas | Published: 2024-04-01 | Updated: 2024-04-02
Topics: LLM Security | Privacy Protection | Privacy Protection Method

To Err is Machine: Vulnerability Detection Challenges LLM Reasoning

Authors: Benjamin Steenhoek, Md Mahbubur Rahman, Monoshi Kumar Roy, Mirza Sanjida Alam, Hengbo Tong, Swarna Das, Earl T. Barr, Wei Le | Published: 2024-03-25 | Updated: 2025-01-07
Topics: DoS Mitigation | LLM Security | Prompt Injection

Large Language Models for Blockchain Security: A Systematic Literature Review

Authors: Zheyuan He, Zihao Li, Sen Yang, He Ye, Ao Qiao, Xiaosong Zhang, Xiapu Luo, Ting Chen | Published: 2024-03-21 | Updated: 2025-03-24
Topics: LLM Security | Algorithm | Blockchain Technology

Large Language Models in 6G Security: Challenges and Opportunities

Authors: Tri Nguyen, Huong Nguyen, Ahmad Ijaz, Saeid Sheikhi, Athanasios V. Vasilakos, Panos Kostakos | Published: 2024-03-18
Topics: LLM Security | Cybersecurity | Decentralized LLM Architecture

What Was Your Prompt? A Remote Keylogging Attack on AI Assistants

Authors: Roy Weiss, Daniel Ayzenshteyn, Guy Amit, Yisroel Mirsky | Published: 2024-03-14
Topics: LLM Security | Token Processing and Collection | Token Collection Method

CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion

Authors: Qibing Ren, Chang Gao, Jing Shao, Junchi Yan, Xin Tan, Wai Lam, Lizhuang Ma | Published: 2024-03-12 | Updated: 2024-09-14
Topics: LLM Security | Code Generation | Prompt Injection

Fuzzing BusyBox: Leveraging LLM and Crash Reuse for Embedded Bug Unearthing

Authors: Asmita, Yaroslav Oliinyk, Michael Scott, Ryan Tsang, Chongzhou Fang, Houman Homayoun | Published: 2024-03-06
Topics: LLM Security | Fuzzing | Initial Seed Generation

AutoAttacker: A Large Language Model Guided System to Implement Automatic Cyber-attacks

Authors: Jiacen Xu, Jack W. Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, Zhou Li | Published: 2024-03-02
Topics: LLM Security | Prompt Injection | Attack Method

Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction

Authors: Tong Liu, Yingjie Zhang, Zhe Zhao, Yinpeng Dong, Guozhu Meng, Kai Chen | Published: 2024-02-28 | Updated: 2024-06-10
Topics: LLM Security | LLM Performance Evaluation | Prompt Injection

LLMs Can Defend Themselves Against Jailbreaking in a Practical Manner: A Vision Paper

Authors: Daoyuan Wu, Shuai Wang, Yang Liu, Ning Liu | Published: 2024-02-24 | Updated: 2024-03-04
Topics: LLM Security | Prompt Injection | Prompt Engineering