Prompt Injection

Vulnerabilities of Foundation Model Integrated Federated Learning Under Adversarial Threats

Authors: Chen Wu, Xi Li, Jiaqi Wang | Published: 2024-01-18 | Updated: 2024-04-02
Tags: Prompt Injection, Poisoning, Federated Learning

Excuse me, sir? Your language model is leaking (information)

Authors: Or Zamir | Published: 2024-01-18
Tags: Watermarking, Prompt Injection, Dynamic Error Correction Code

Lateral Phishing With Large Language Models: A Large Organization Comparative Study

Authors: Mazal Bethany, Athanasios Galiopoulos, Emet Bethany, Mohammad Bahrami Karkevandi, Nicole Beebe, Nishant Vishwamitra, Peyman Najafirad | Published: 2024-01-18 | Updated: 2025-04-15
Tags: Phishing Attack, Prompt Injection

Signed-Prompt: A New Approach to Prevent Prompt Injection Attacks Against LLM-Integrated Applications

Authors: Xuchen Suo | Published: 2024-01-15
Tags: LLM Security, Prompt Injection

Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning

Authors: Shuai Zhao, Meihuizi Jia, Luu Anh Tuan, Fengjun Pan, Jinming Wen | Published: 2024-01-11 | Updated: 2024-10-09
Tags: Backdoor Attack, Prompt Injection

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

Authors: Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M. Ziegler, Tim Maxwell, Newton Cheng, Adam Jermyn, Amanda Askell, Ansh Radhakrishnan, Cem Anil, David Duvenaud, Deep Ganguli, Fazl Barez, Jack Clark, Kamal Ndousse, Kshitij Sachan, Michael Sellitto, Mrinank Sharma, Nova DasSarma, Roger Grosse, Shauna Kravec, Yuntao Bai, Zachary Witten, Marina Favaro, Jan Brauner, Holden Karnofsky, Paul Christiano, Samuel R. Bowman, Logan Graham, Jared Kaplan, Sören Mindermann, Ryan Greenblatt, Buck Shlegeris, Nicholas Schiefer, Ethan Perez | Published: 2024-01-10 | Updated: 2024-01-17
Tags: Backdoor Attack, Prompt Injection, Reinforcement Learning

Malla: Demystifying Real-world Large Language Model Integrated Malicious Services

Authors: Zilong Lin, Jian Cui, Xiaojing Liao, XiaoFeng Wang | Published: 2024-01-06 | Updated: 2024-08-19
Tags: Phishing Attack, Prompt Injection, Malicious Content Generation

LLbezpeky: Leveraging Large Language Models for Vulnerability Detection

Authors: Noble Saji Mathews, Yelizaveta Brus, Yousra Aafer, Meiyappan Nagappan, Shane McIntosh | Published: 2024-01-02 | Updated: 2024-02-13
Tags: LLM Performance Evaluation, Prompt Injection, Vulnerability Management

A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models

Authors: Daniel Wankit Yip, Aysan Esmradi, Chun Fai Chan | Published: 2024-01-02
Tags: LLM Security, Prompt Injection, Attack Evaluation

Digger: Detecting Copyright Content Mis-usage in Large Language Model Training

Authors: Haodong Li, Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu, Guoai Xu, Guosheng Xu, Haoyu Wang | Published: 2024-01-01
Tags: LLM Performance Evaluation, Dataset Generation, Prompt Injection