Jatmo: Prompt Injection Defense by Task-Specific Finetuning | Authors: Julien Piet, Maha Alrashed, Chawin Sitawarin, Sizhe Chen, Zeming Wei, Elizabeth Sun, Basel Alomair, David Wagner | Published: 2023-12-29 | Updated: 2024-01-08 | Tags: LLM Security, Cyber Attack, Prompt Injection
SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security | Authors: Zefang Liu | Published: 2023-12-26 | Tags: LLM Performance Evaluation, Cybersecurity, Prompt Injection
ChatGPT, Llama, can you write my report? An experiment on assisted digital forensics reports written using (Local) Large Language Models | Authors: Gaëtan Michelet, Frank Breitinger | Published: 2023-12-22 | Tags: Forensic Report, Prompt Injection
MetaAID 2.5: A Secure Framework for Developing Metaverse Applications via Large Language Models | Authors: Hongyin Zhu | Published: 2023-12-22 | Tags: LLM Security, Data Generation, Prompt Injection
HW-V2W-Map: Hardware Vulnerability to Weakness Mapping Framework for Root Cause Analysis with GPT-assisted Mitigation Suggestion | Authors: Yu-Zheng Lin, Muntasir Mamun, Muhtasim Alam Chowdhury, Shuyu Cai, Mingyu Zhu, Banafsheh Saber Latibari, Kevin Immanuel Gubbi, Najmeh Nazari Bavarsad, Arjun Caputo, Avesta Sasan, Houman Homayoun, Setareh Rafatirad, Pratik Satam, Soheil Salehi | Published: 2023-12-21 | Tags: CVE Information Extraction, Prompt Injection, Vulnerability Management
A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models | Authors: Aysan Esmradi, Daniel Wankit Yip, Chun Fai Chan | Published: 2023-12-18 | Tags: Cyber Attack, Prompt Injection, Attack Method
JailGuard: A Universal Detection Framework for LLM Prompt-based Attacks | Authors: Xiaoyu Zhang, Cen Zhang, Tianlin Li, Yihao Huang, Xiaojun Jia, Ming Hu, Jie Zhang, Yang Liu, Shiqing Ma, Chao Shen | Published: 2023-12-17 | Updated: 2025-03-15 | Tags: Text Perturbation Method, Prompt Injection, Attack Method
Silent Guardian: Protecting Text from Malicious Exploitation by Large Language Models | Authors: Jiawei Zhao, Kejiang Chen, Xiaojian Yuan, Yuang Qi, Weiming Zhang, Nenghai Yu | Published: 2023-12-15 | Updated: 2024-10-10 | Tags: Privacy Protection Method, Prompt Injection, Watermark Evaluation
Binary Code Summarization: Benchmarking ChatGPT/GPT-4 and Other Large Language Models | Authors: Xin Jin, Jonathan Larson, Weiwei Yang, Zhiqiang Lin | Published: 2023-12-15 | Tags: LLM Performance Evaluation, Program Analysis, Prompt Injection
Maatphor: Automated Variant Analysis for Prompt Injection Attacks | Authors: Ahmed Salem, Andrew Paverd, Boris Köpf | Published: 2023-12-12 | Tags: LLM Security, Prompt Injection, Evaluation Method