ChatIDS: Explainable Cybersecurity Using Generative AI | Authors: Victor Jüttner, Martin Grimmer, Erik Buchmann | Published: 2023-06-26 | Tags: Online Safety Advice, Prompt Injection, Expert Opinion Collection
On the Uses of Large Language Models to Interpret Ambiguous Cyberattack Descriptions | Authors: Reza Fayyazi, Shanchieh Jay Yang | Published: 2023-06-24 | Updated: 2023-08-22 | Tags: Prompt Injection, Malware Classification, Natural Language Processing
Visual Adversarial Examples Jailbreak Aligned Large Language Models | Authors: Xiangyu Qi, Kaixuan Huang, Ashwinee Panda, Peter Henderson, Mengdi Wang, Prateek Mittal | Published: 2023-06-22 | Updated: 2023-08-16 | Tags: Prompt Injection, Inappropriate Content Generation, Adversarial Attack
Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models | Authors: Myles Foley, Ambrish Rawat, Taesung Lee, Yufang Hou, Gabriele Picco, Giulio Zizzo | Published: 2023-06-15 | Tags: LLM Performance Evaluation, Algorithm, Prompt Injection
Augmenting Greybox Fuzzing with Generative AI | Authors: Jie Hu, Qian Zhang, Heng Yin | Published: 2023-06-11 | Tags: Fuzzing, Prompt Injection, Performance Evaluation
Prompt Injection Attack against LLM-integrated Applications | Authors: Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Zihao Wang, Xiaofeng Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu | Published: 2023-06-08 | Updated: 2024-03-02 | Tags: Prompt Injection, Malicious Prompt
On the Detectability of ChatGPT Content: Benchmarking, Methodology, and Evaluation through the Lens of Academic Writing | Authors: Zeyan Liu, Zijun Yao, Fengjun Li, Bo Luo | Published: 2023-06-07 | Updated: 2024-03-18 | Tags: LLM Application, Prompt Injection, Literature List
On Evaluating Adversarial Robustness of Large Vision-Language Models | Authors: Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-Man Cheung, Min Lin | Published: 2023-05-26 | Updated: 2023-10-29 | Tags: LLM Performance Evaluation, Prompt Injection, Adversarial Attack
Spear Phishing With Large Language Models | Authors: Julian Hazell | Published: 2023-05-11 | Updated: 2023-12-22 | Tags: Cyber Attack, Phishing Attack, Prompt Injection
In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT | Authors: Xinyue Shen, Zeyuan Chen, Michael Backes, Yang Zhang | Published: 2023-04-18 | Updated: 2023-10-05 | Tags: LLM Security, Prompt Injection, User Experience Evaluation