LLM Security

Signed-Prompt: A New Approach to Prevent Prompt Injection Attacks Against LLM-Integrated Applications

Authors: Xuchen Suo | Published: 2024-01-15
LLM Security
Prompt Injection

Detection and Defense Against Prominent Attacks on Preconditioned LLM-Integrated Virtual Assistants

Authors: Chun Fai Chan, Daniel Wankit Yip, Aysan Esmradi | Published: 2024-01-02
LLM Security
Character Role-Play
System Prompt Generation

A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models

Authors: Daniel Wankit Yip, Aysan Esmradi, Chun Fai Chan | Published: 2024-01-02
LLM Security
Prompt Injection
Attack Evaluation

Jatmo: Prompt Injection Defense by Task-Specific Finetuning

Authors: Julien Piet, Maha Alrashed, Chawin Sitawarin, Sizhe Chen, Zeming Wei, Elizabeth Sun, Basel Alomair, David Wagner | Published: 2023-12-29 | Updated: 2024-01-08
LLM Security
Cyber Attack
Prompt Injection

MetaAID 2.5: A Secure Framework for Developing Metaverse Applications via Large Language Models

Authors: Hongyin Zhu | Published: 2023-12-22
LLM Security
Data Generation
Prompt Injection

No-Skim: Towards Efficiency Robustness Evaluation on Skimming-based Language Models

Authors: Shengyao Zhang, Mi Zhang, Xudong Pan, Min Yang | Published: 2023-12-15 | Updated: 2023-12-18
Evolution of AI
LLM Security
Watermarking

Maatphor: Automated Variant Analysis for Prompt Injection Attacks

Authors: Ahmed Salem, Andrew Paverd, Boris Köpf | Published: 2023-12-12
LLM Security
Prompt Injection
Evaluation Method

Make Them Spill the Beans! Coercive Knowledge Extraction from (Production) LLMs

Authors: Zhuo Zhang, Guangyu Shen, Guanhong Tao, Siyuan Cheng, Xiangyu Zhang | Published: 2023-12-08
LLM Security
Prompt Injection
Inappropriate Content Generation

Forcing Generative Models to Degenerate: The Power of Data Poisoning Attacks

Authors: Shuli Jiang, Swanand Ravindra Kadhe, Yi Zhou, Ling Cai, Nathalie Baracaldo | Published: 2023-12-07
LLM Security
Poisoning Attack
Model Performance Evaluation

DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions

Authors: Fangzhou Wu, Xiaogeng Liu, Chaowei Xiao | Published: 2023-12-07 | Updated: 2023-12-12
LLM Security
Code Generation
Prompt Injection