LLM Agents can Autonomously Hack Websites
Authors: Richard Fang, Rohan Bindu, Akul Gupta, Qiusi Zhan, Daniel Kang | Published: 2024-02-06 | Updated: 2024-02-16
Tags: Website Vulnerability, Cyber Attack, Prompt Injection

Detecting Scams Using Large Language Models
Authors: Liming Jiang | Published: 2024-02-05
Tags: LLM Security, Phishing Detection, Prompt Injection

Reconstruct Your Previous Conversations! Comprehensively Investigating Privacy Leakage Risks in Conversations with GPT Models
Authors: Junjie Chu, Zeyang Sha, Michael Backes, Yang Zhang | Published: 2024-02-05 | Updated: 2024-10-07
Tags: Privacy Protection, Prompt Injection, Malicious Prompt

Adversarial Text Purification: A Large Language Model Approach for Defense
Authors: Raha Moraffah, Shubh Khandelwal, Amrita Bhattacharjee, Huan Liu | Published: 2024-02-05
Tags: Text Generation Method, Prompt Injection, Adversarial Text Purification

Jailbreaking Attack against Multimodal Large Language Model
Authors: Zhenxing Niu, Haodong Ren, Xinbo Gao, Gang Hua, Rong Jin | Published: 2024-02-04
Tags: Prompt Injection, Malicious Content Generation, Information Gathering Methods

Human-Centered Privacy Research in the Age of Large Language Models
Authors: Tianshi Li, Sauvik Das, Hao-Ping Lee, Dakuo Wang, Bingsheng Yao, Zhiping Zhang | Published: 2024-02-03
Tags: Privacy Protection, Prompt Injection, Human-Centered Approach

Ocassionally Secure: A Comparative Analysis of Code Generation Assistants
Authors: Ran Elgedawy, John Sadik, Senjuti Dutta, Anuj Gautam, Konstantinos Georgiou, Farzin Gholamrezae, Fujiao Ji, Kyungchan Lim, Qian Liu, Scott Ruoti | Published: 2024-02-01
Tags: LLM Performance Evaluation, Code Generation, Prompt Injection

A Cross-Language Investigation into Jailbreak Attacks in Large Language Models
Authors: Jie Li, Yi Liu, Chongyang Liu, Ling Shi, Xiaoning Ren, Yaowen Zheng, Yang Liu, Yinxing Xue | Published: 2024-01-30
Tags: Character Role Acting, Prompt Injection, Multilingual LLM Jailbreak

LLM4Vuln: A Unified Evaluation Framework for Decoupling and Enhancing LLMs' Vulnerability Reasoning
Authors: Yuqiang Sun, Daoyuan Wu, Yue Xue, Han Liu, Wei Ma, Lyuye Zhang, Yang Liu, Yingjiu Li | Published: 2024-01-29 | Updated: 2025-01-13
Tags: LLM Performance Evaluation, Prompt Injection, Vulnerability Management

Evaluation of LLM Chatbots for OSINT-based Cyber Threat Awareness
Authors: Samaneh Shafee, Alysson Bessani, Pedro M. Ferreira | Published: 2024-01-26 | Updated: 2024-04-19
Tags: LLM Performance Evaluation, Cybersecurity, Prompt Injection