Literature Database

Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks
Authors: Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang | Published: 2024-03-04
Tags: Privacy Protection Method, Prompt Injection, Membership Inference

Using LLMs for Tabletop Exercises within the Security Domain
Authors: Sam Hays, Jules White | Published: 2024-03-03
Tags: Cybersecurity, Tabletop Exercise Challenges, Prompt Injection

AutoAttacker: A Large Language Model Guided System to Implement Automatic Cyber-attacks
Authors: Jiacen Xu, Jack W. Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, Zhou Li | Published: 2024-03-02
Tags: LLM Security, Prompt Injection, Attack Method

Teach LLMs to Phish: Stealing Private Information from Language Models
Authors: Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal | Published: 2024-03-01
Tags: Backdoor Attack, Phishing Detection, Prompt Injection

PRSA: PRompt Stealing Attacks against Large Language Models
Authors: Yong Yang, Changjiang Li, Yi Jiang, Xi Chen, Haoyu Wang, Xuhong Zhang, Zonghui Wang, Shouling Ji | Published: 2024-02-29 | Updated: 2024-06-08
Tags: LLM Performance Evaluation, Prompt Injection, Prompt Engineering

Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction
Authors: Tong Liu, Yingjie Zhang, Zhe Zhao, Yinpeng Dong, Guozhu Meng, Kai Chen | Published: 2024-02-28 | Updated: 2024-06-10
Tags: LLM Security, LLM Performance Evaluation, Prompt Injection

ChatSpamDetector: Leveraging Large Language Models for Effective Phishing Email Detection
Authors: Takashi Koide, Naoki Fukushi, Hiroki Nakano, Daiki Chiba | Published: 2024-02-28 | Updated: 2024-08-23
Tags: Phishing Detection, Prompt Injection, Email Security

Token-Specific Watermarking with Enhanced Detectability and Semantic Coherence for Large Language Models
Authors: Mingjia Huo, Sai Ashish Somayajula, Youwei Liang, Ruisi Zhang, Farinaz Koushanfar, Pengtao Xie | Published: 2024-02-28 | Updated: 2024-06-06
Tags: Watermarking, Prompt Injection, Multi-Objective Optimization

LLMs Can Defend Themselves Against Jailbreaking in a Practical Manner: A Vision Paper
Authors: Daoyuan Wu, Shuai Wang, Yang Liu, Ning Liu | Published: 2024-02-24 | Updated: 2024-03-04
Tags: LLM Security, Prompt Injection, Prompt Engineering

Coercing LLMs to do and reveal (almost) anything
Authors: Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, Tom Goldstein | Published: 2024-02-21
Tags: LLM Security, Prompt Injection, Attack Method