Can LLMs get help from other LLMs without revealing private information? | Authors: Florian Hartmann, Duc-Hieu Tran, Peter Kairouz, Victor Cărbune, Blaise Aguera y Arcas | Published: 2024-04-01 | Updated: 2024-04-02 | Tags: LLM Security, Privacy Protection, Privacy Protection Method
To Err is Machine: Vulnerability Detection Challenges LLM Reasoning | Authors: Benjamin Steenhoek, Md Mahbubur Rahman, Monoshi Kumar Roy, Mirza Sanjida Alam, Hengbo Tong, Swarna Das, Earl T. Barr, Wei Le | Published: 2024-03-25 | Updated: 2025-01-07 | Tags: DoS Mitigation, LLM Security, Prompt Injection
Large Language Models for Blockchain Security: A Systematic Literature Review | Authors: Zheyuan He, Zihao Li, Sen Yang, He Ye, Ao Qiao, Xiaosong Zhang, Xiapu Luo, Ting Chen | Published: 2024-03-21 | Updated: 2025-03-24 | Tags: LLM Security, Algorithm, Blockchain Technology
Large language models in 6G security: challenges and opportunities | Authors: Tri Nguyen, Huong Nguyen, Ahmad Ijaz, Saeid Sheikhi, Athanasios V. Vasilakos, Panos Kostakos | Published: 2024-03-18 | Tags: LLM Security, Cybersecurity, Decentralized LLM Architecture
What Was Your Prompt? A Remote Keylogging Attack on AI Assistants | Authors: Roy Weiss, Daniel Ayzenshteyn, Guy Amit, Yisroel Mirsky | Published: 2024-03-14 | Tags: LLM Security, Token Processing and Collection, Token Collection Method
CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion | Authors: Qibing Ren, Chang Gao, Jing Shao, Junchi Yan, Xin Tan, Wai Lam, Lizhuang Ma | Published: 2024-03-12 | Updated: 2024-09-14 | Tags: LLM Security, Code Generation, Prompt Injection
Fuzzing BusyBox: Leveraging LLM and Crash Reuse for Embedded Bug Unearthing | Authors: Asmita, Yaroslav Oliinyk, Michael Scott, Ryan Tsang, Chongzhou Fang, Houman Homayoun | Published: 2024-03-06 | Tags: LLM Security, Fuzzing, Initial Seed Generation
AutoAttacker: A Large Language Model Guided System to Implement Automatic Cyber-attacks | Authors: Jiacen Xu, Jack W. Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, Zhou Li | Published: 2024-03-02 | Tags: LLM Security, Prompt Injection, Attack Method
Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction | Authors: Tong Liu, Yingjie Zhang, Zhe Zhao, Yinpeng Dong, Guozhu Meng, Kai Chen | Published: 2024-02-28 | Updated: 2024-06-10 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection
LLMs Can Defend Themselves Against Jailbreaking in a Practical Manner: A Vision Paper | Authors: Daoyuan Wu, Shuai Wang, Yang Liu, Ning Liu | Published: 2024-02-24 | Updated: 2024-03-04 | Tags: LLM Security, Prompt Injection, Prompt Engineering