Prompt Injection

Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information

Authors: Zhengmian Hu, Gang Wu, Saayan Mitra, Ruiyi Zhang, Tong Sun, Heng Huang, Viswanathan Swaminathan | Published: 2023-11-20 | Updated: 2024-02-18
Prompt Injection
Prompt validation
Robustness Evaluation

Bergeron: Combating Adversarial Attacks through a Conscience-Based Alignment Framework

Authors: Matthew Pisano, Peter Ly, Abraham Sanders, Bingsheng Yao, Dakuo Wang, Tomek Strzalkowski, Mei Si | Published: 2023-11-16 | Updated: 2024-08-18
Prompt Injection
Multilingual LLM Jailbreak
Adversarial attack

Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections

Authors: Yuanpu Cao, Bochuan Cao, Jinghui Chen | Published: 2023-11-15 | Updated: 2024-06-09
Backdoor Attack
Prompt Injection

Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment

Authors: Haoran Wang, Kai Shu | Published: 2023-11-15 | Updated: 2024-08-15
Prompt Injection
Attack Method
Natural Language Processing

Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts

Authors: Yuanwei Wu, Xiang Li, Yixin Liu, Pan Zhou, Lichao Sun | Published: 2023-11-15 | Updated: 2024-01-20
Prompt Injection
Attack Method
Face Recognition

A Robust Semantics-based Watermark for Large Language Model against Paraphrasing

Authors: Jie Ren, Han Xu, Yiding Liu, Yingqian Cui, Shuaiqiang Wang, Dawei Yin, Jiliang Tang | Published: 2023-11-15 | Updated: 2024-04-01
Prompt Injection
Robustness Evaluation
Information Hiding Techniques

DEMASQ: Unmasking the ChatGPT Wordsmith

Authors: Kavita Kumari, Alessandro Pegoraro, Hossein Fereidooni, Ahmad-Reza Sadeghi | Published: 2023-11-08
Energy-Based Model
Prompt Injection
Evaluation Method

Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications

Authors: Fengqing Jiang, Zhangchen Xu, Luyao Niu, Boxin Wang, Jinyuan Jia, Bo Li, Radha Poovendran | Published: 2023-11-07 | Updated: 2023-11-29
Prompt Injection
Experimental Validation
Attack Method

ELEGANT: Certified Defense on the Fairness of Graph Neural Networks

Authors: Yushun Dong, Binchi Zhang, Hanghang Tong, Jundong Li | Published: 2023-11-05
Graph Neural Network
Bias Mitigation Techniques
Prompt Injection

Comprehensive Assessment of Toxicity in ChatGPT

Authors: Boyang Zhang, Xinyue Shen, Wai Man Si, Zeyang Sha, Zeyuan Chen, Ahmed Salem, Yun Shen, Michael Backes, Yang Zhang | Published: 2023-11-03
Abuse of AI Chatbots
Prompt Injection
Inappropriate Content Generation