Prompt Injection

Attack Prompt Generation for Red Teaming and Defending Large Language Models

Authors: Boyi Deng, Wenjie Wang, Fuli Feng, Yang Deng, Qifan Wang, Xiangnan He | Published: 2023-10-19
Prompt Injection
Attack Evaluation
Adversarial Example

Large Language Models for Code Analysis: Do LLMs Really Do Their Job?

Authors: Chongzhou Fang, Ning Miao, Shaurya Srivastav, Jialin Liu, Ruoyu Zhang, Ruijie Fang, Asmita, Ryan Tsang, Najmeh Nazari, Han Wang, Houman Homayoun | Published: 2023-10-18 | Updated: 2024-03-05
Dataset Generation
Program Analysis
Prompt Injection

Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks

Authors: Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael Abu-Ghazaleh | Published: 2023-10-16
Prompt Injection
Adversarial Example
Adversarial Training

Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation

Authors: Yangsibo Huang, Samyak Gupta, Mengzhou Xia, Kai Li, Danqi Chen | Published: 2023-10-10
Prompt Injection
Attack Evaluation
Adversarial Attack

LLMs Killed the Script Kiddie: How Agents Supported by Large Language Models Change the Landscape of Network Threat Testing

Authors: Stephen Moskal, Sam Laney, Erik Hemberg, Una-May O'Reilly | Published: 2023-10-10
Prompt Injection
Information Gathering Methods
Threat Actor Support

A Semantic Invariant Robust Watermark for Large Language Models

Authors: Aiwei Liu, Leyi Pan, Xuming Hu, Shiao Meng, Lijie Wen | Published: 2023-10-10 | Updated: 2024-05-19
Watermarking
Prompt Injection
Performance Evaluation

SCAR: Power Side-Channel Analysis at RTL-Level

Authors: Amisha Srivastava, Sanjay Das, Navnil Choudhury, Rafail Psiakis, Pedro Henrique Silva, Debjit Pal, Kanad Basu | Published: 2023-10-10
Prompt Injection
Cryptography
Vulnerability Prediction

LLM for SoC Security: A Paradigm Shift

Authors: Dipayan Saha, Shams Tarek, Katayoon Yahyaei, Sujan Kumar Saha, Jingbo Zhou, Mark Tehranipoor, Farimah Farahmandi | Published: 2023-10-09
LLM Application
Prompt Injection
Vulnerability Detection

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

Authors: Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson | Published: 2023-10-05
Data Collection
Prompt Injection
Information Gathering Methods

SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks

Authors: Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas | Published: 2023-10-05 | Updated: 2024-06-11
LLM Performance Evaluation
Prompt Injection
Defense Method