Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models
Authors: Fredrik Heiding, Bruce Schneier, Arun Vishwanath, Jeremy Bernstein, Peter S. Park | Published: 2023-08-23 | Updated: 2023-11-30
Tags: Phishing, Phishing Attack, Prompt Injection
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Authors: Shahriar Golchin, Mihai Surdeanu | Published: 2023-08-16 | Updated: 2024-02-21
Tags: Data Contamination Detection, Prompt Injection, Natural Language Processing
Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models
Authors: Yugeng Liu, Tianshuo Cong, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang | Published: 2023-08-15 | Updated: 2024-05-06
Tags: Prompt Injection, Model Performance Evaluation, Robustness Evaluation
PentestGPT: An LLM-empowered Automatic Penetration Testing Tool
Authors: Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass | Published: 2023-08-13 | Updated: 2024-06-02
Tags: Prompt Injection, Penetration Testing Methods, Performance Evaluation
An Empirical Study on Using Large Language Models to Analyze Software Supply Chain Security Failures
Authors: Tanmay Singla, Dharun Anandayuvaraj, Kelechi G. Kalu, Taylor R. Schorlemmer, James C. Davis | Published: 2023-08-09
Tags: Cyber Attack, Prompt Injection, Model Performance Evaluation
"Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models
Authors: Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, Yang Zhang | Published: 2023-08-07 | Updated: 2024-05-15
Tags: LLM Security, Character Role Acting, Prompt Injection
Mondrian: Prompt Abstraction Attack Against Large Language Models for Cheaper API Pricing
Authors: Wai Man Si, Michael Backes, Yang Zhang | Published: 2023-08-07
Tags: Watermarking, Prompt Injection, Challenges of Generative Models
PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification
Authors: Hongwei Yao, Jian Lou, Kui Ren, Zhan Qin | Published: 2023-08-05 | Updated: 2023-11-28
Tags: Soft Prompt Optimization, Prompt Injection, Watermark Robustness
Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection
Authors: Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, Hongxia Jin | Published: 2023-07-31 | Updated: 2024-04-03
Tags: LLM Security, System Prompt Generation, Prompt Injection
Universal and Transferable Adversarial Attacks on Aligned Language Models
Authors: Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, Matt Fredrikson | Published: 2023-07-27 | Updated: 2023-12-20
Tags: LLM Security, Prompt Injection, Inappropriate Content Generation