Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM | Authors: Bochuan Cao, Yuanpu Cao, Lu Lin, Jinghui Chen | Published: 2023-09-18 | Updated: 2024-06-12 | Tags: Prompt Injection, Safety Alignment, Defense Method
FuzzLLM: A Novel and Universal Fuzzing Framework for Proactively Discovering Jailbreak Vulnerabilities in Large Language Models | Authors: Dongyu Yao, Jianshu Zhang, Ian G. Harris, Marcel Carlsson | Published: 2023-09-11 | Updated: 2024-04-14 | Tags: LLM Security, Watermarking, Prompt Injection
Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review | Authors: Zhenyong Zhang, Mengxiang Liu, Mingyang Sun, Ruilong Deng, Peng Cheng, Dusit Niyato, Mo-Yuen Chow, Jiming Chen | Published: 2023-08-30 | Updated: 2023-12-25 | Tags: Energy Management, Prompt Injection, Adversarial Training
Detecting Language Model Attacks with Perplexity | Authors: Gabriel Alon, Michael Kamfonas | Published: 2023-08-27 | Updated: 2023-11-07 | Tags: LLM Security, Prompt Injection, Malicious Prompt
Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities | Authors: Maximilian Mozes, Xuanli He, Bennett Kleinberg, Lewis D. Griffin | Published: 2023-08-24 | Tags: Prompt Injection, Malicious Content Generation, Adversarial Example
Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models | Authors: Fredrik Heiding, Bruce Schneier, Arun Vishwanath, Jeremy Bernstein, Peter S. Park | Published: 2023-08-23 | Updated: 2023-11-30 | Tags: Phishing, Phishing Attack, Prompt Injection
Time Travel in LLMs: Tracing Data Contamination in Large Language Models | Authors: Shahriar Golchin, Mihai Surdeanu | Published: 2023-08-16 | Updated: 2024-02-21 | Tags: Data Contamination Detection, Prompt Injection, Natural Language Processing
Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models | Authors: Yugeng Liu, Tianshuo Cong, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang | Published: 2023-08-15 | Updated: 2024-05-06 | Tags: Prompt Injection, Model Performance Evaluation, Robustness Evaluation
PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Authors: Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass | Published: 2023-08-13 | Updated: 2024-06-02 | Tags: Prompt Injection, Penetration Testing Methods, Performance Evaluation
An Empirical Study on Using Large Language Models to Analyze Software Supply Chain Security Failures | Authors: Tanmay Singla, Dharun Anandayuvaraj, Kelechi G. Kalu, Taylor R. Schorlemmer, James C. Davis | Published: 2023-08-09 | Tags: Cyber Attack, Prompt Injection, Model Performance Evaluation