Attack Prompt Generation for Red Teaming and Defending Large Language Models | Authors: Boyi Deng, Wenjie Wang, Fuli Feng, Yang Deng, Qifan Wang, Xiangnan He | Published: 2023-10-19 | Tags: Prompt Injection, Attack Evaluation, Adversarial Example
Large Language Models for Code Analysis: Do LLMs Really Do Their Job? | Authors: Chongzhou Fang, Ning Miao, Shaurya Srivastav, Jialin Liu, Ruoyu Zhang, Ruijie Fang, Asmita, Ryan Tsang, Najmeh Nazari, Han Wang, Houman Homayoun | Published: 2023-10-18 | Updated: 2024-03-05 | Tags: Dataset Generation, Program Analysis, Prompt Injection
Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks | Authors: Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael Abu-Ghazaleh | Published: 2023-10-16 | Tags: Prompt Injection, Adversarial Example, Adversarial Training
Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation | Authors: Yangsibo Huang, Samyak Gupta, Mengzhou Xia, Kai Li, Danqi Chen | Published: 2023-10-10 | Tags: Prompt Injection, Attack Evaluation, Adversarial Attack
LLMs Killed the Script Kiddie: How Agents Supported by Large Language Models Change the Landscape of Network Threat Testing | Authors: Stephen Moskal, Sam Laney, Erik Hemberg, Una-May O'Reilly | Published: 2023-10-10 | Tags: Prompt Injection, Information Gathering Methods, Threat Actor Support
A Semantic Invariant Robust Watermark for Large Language Models | Authors: Aiwei Liu, Leyi Pan, Xuming Hu, Shiao Meng, Lijie Wen | Published: 2023-10-10 | Updated: 2024-05-19 | Tags: Watermarking, Prompt Injection, Performance Evaluation
SCAR: Power Side-Channel Analysis at RTL-Level | Authors: Amisha Srivastava, Sanjay Das, Navnil Choudhury, Rafail Psiakis, Pedro Henrique Silva, Debjit Pal, Kanad Basu | Published: 2023-10-10 | Tags: Prompt Injection, Cryptography, Vulnerability Prediction
LLM for SoC Security: A Paradigm Shift | Authors: Dipayan Saha, Shams Tarek, Katayoon Yahyaei, Sujan Kumar Saha, Jingbo Zhou, Mark Tehranipoor, Farimah Farahmandi | Published: 2023-10-09 | Tags: LLM Application, Prompt Injection, Vulnerability Detection
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! | Authors: Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson | Published: 2023-10-05 | Tags: Data Collection, Prompt Injection, Information Gathering Methods
SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks | Authors: Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas | Published: 2023-10-05 | Updated: 2024-06-11 | Tags: LLM Performance Evaluation, Prompt Injection, Defense Method