On Protecting the Data Privacy of Large Language Models (LLMs): A Survey | Authors: Biwei Yan, Kun Li, Minghui Xu, Yueyan Dong, Yue Zhang, Zhaochun Ren, Xiuzhen Cheng | Published: 2024-03-08 | Updated: 2024-03-14 | Tags: Backdoor Attack, Privacy Protection Method, Prompt Injection
Do You Trust Your Model? Emerging Malware Threats in the Deep Learning Ecosystem | Authors: Dorjan Hitaj, Giulio Pagnotta, Fabio De Gaspari, Sediola Ruko, Briland Hitaj, Luigi V. Mancini, Fernando Perez-Cruz | Published: 2024-03-06 | Updated: 2025-05-13 | Tags: Prompt Injection, Malware Classification, Federated Learning
Catch’em all: Classification of Rare, Prominent, and Novel Malware Families | Authors: Maksim E. Eren, Ryan Barron, Manish Bhattarai, Selma Wanna, Nicholas Solovyev, Kim Rasmussen, Boian S. Alexandrov, Charles Nicholas | Published: 2024-03-04 | Tags: Class Imbalance, Prompt Injection, Malware Classification
KnowPhish: Large Language Models Meet Multimodal Knowledge Graphs for Enhancing Reference-Based Phishing Detection | Authors: Yuexin Li, Chengyu Huang, Shumin Deng, Mei Lin Lock, Tri Cao, Nay Oo, Hoon Wei Lim, Bryan Hooi | Published: 2024-03-04 | Updated: 2024-06-15 | Tags: Phishing Detection, Brand Recognition Problem, Prompt Injection
Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks | Authors: Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang | Published: 2024-03-04 | Tags: Privacy Protection Method, Prompt Injection, Membership Inference
Using LLMs for Tabletop Exercises within the Security Domain | Authors: Sam Hays, Jules White | Published: 2024-03-03 | Tags: Cybersecurity, Tabletop Exercise Challenges, Prompt Injection
AutoAttacker: A Large Language Model Guided System to Implement Automatic Cyber-attacks | Authors: Jiacen Xu, Jack W. Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, Zhou Li | Published: 2024-03-02 | Tags: LLM Security, Prompt Injection, Attack Method
Teach LLMs to Phish: Stealing Private Information from Language Models | Authors: Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal | Published: 2024-03-01 | Tags: Backdoor Attack, Phishing Detection, Prompt Injection
PRSA: PRompt Stealing Attacks against Large Language Models | Authors: Yong Yang, Changjiang Li, Yi Jiang, Xi Chen, Haoyu Wang, Xuhong Zhang, Zonghui Wang, Shouling Ji | Published: 2024-02-29 | Updated: 2024-06-08 | Tags: LLM Performance Evaluation, Prompt Injection, Prompt Engineering
Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction | Authors: Tong Liu, Yingjie Zhang, Zhe Zhao, Yinpeng Dong, Guozhu Meng, Kai Chen | Published: 2024-02-28 | Updated: 2024-06-10 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection