Prompt Injection

A Semantic Invariant Robust Watermark for Large Language Models

Authors: Aiwei Liu, Leyi Pan, Xuming Hu, Shiao Meng, Lijie Wen | Published: 2023-10-10 | Updated: 2024-05-19
Watermarking
Prompt Injection
Performance Evaluation

SCAR: Power Side-Channel Analysis at RTL-Level

Authors: Amisha Srivastava, Sanjay Das, Navnil Choudhury, Rafail Psiakis, Pedro Henrique Silva, Debjit Pal, Kanad Basu | Published: 2023-10-10
Prompt Injection
Cryptography
Vulnerability Prediction

LLM for SoC Security: A Paradigm Shift

Authors: Dipayan Saha, Shams Tarek, Katayoon Yahyaei, Sujan Kumar Saha, Jingbo Zhou, Mark Tehranipoor, Farimah Farahmandi | Published: 2023-10-09
LLM Application
Prompt Injection
Vulnerability Detection

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

Authors: Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson | Published: 2023-10-05
Data Collection
Prompt Injection
Information Gathering Methods

SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks

Authors: Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas | Published: 2023-10-05 | Updated: 2024-06-11
LLM Performance Evaluation
Prompt Injection
Defense Method

Misusing Tools in Large Language Models With Visual Adversarial Examples

Authors: Xiaohan Fu, Zihan Wang, Shuheng Li, Rajesh K. Gupta, Niloofar Mireshghallah, Taylor Berg-Kirkpatrick, Earlence Fernandes | Published: 2023-10-04
LLM Performance Evaluation
Prompt Injection
Adversarial Example

Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models

Authors: Xianjun Yang, Xiao Wang, Qi Zhang, Linda Petzold, William Yang Wang, Xun Zhao, Dahua Lin | Published: 2023-10-04
Prompt Injection
Safety Alignment
Malicious Content Generation

Low-Resource Languages Jailbreak GPT-4

Authors: Zheng-Xin Yong, Cristina Menghini, Stephen H. Bach | Published: 2023-10-03 | Updated: 2024-01-27
Prompt Injection
Safety Alignment
Vulnerability Detection

Jailbreaker in Jail: Moving Target Defense for Large Language Models

Authors: Bocheng Chen, Advait Paliwal, Qiben Yan | Published: 2023-10-03
LLM Performance Evaluation
Prompt Injection
Evaluation Metrics

On the Safety of Open-Sourced Large Language Models: Does Alignment Really Prevent Them From Being Misused?

Authors: Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu | Published: 2023-10-02
LLM Performance Evaluation
Prompt Injection
Classification of Malicious Actors