LLM Security

Detection and Defense Against Prominent Attacks on Preconditioned LLM-Integrated Virtual Assistants

Authors: Chun Fai Chan, Daniel Wankit Yip, Aysan Esmradi | Published: 2024-01-02
Tags: LLM Security, Character Role Acting, System Prompt Generation

A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models

Authors: Daniel Wankit Yip, Aysan Esmradi, Chun Fai Chan | Published: 2024-01-02
Tags: LLM Security, Prompt Injection, Attack Evaluation

Jatmo: Prompt Injection Defense by Task-Specific Finetuning

Authors: Julien Piet, Maha Alrashed, Chawin Sitawarin, Sizhe Chen, Zeming Wei, Elizabeth Sun, Basel Alomair, David Wagner | Published: 2023-12-29 | Updated: 2024-01-08
Tags: LLM Security, Cyber Attack, Prompt Injection

MetaAID 2.5: A Secure Framework for Developing Metaverse Applications via Large Language Models

Authors: Hongyin Zhu | Published: 2023-12-22
Tags: LLM Security, Data Generation, Prompt Injection

No-Skim: Towards Efficiency Robustness Evaluation on Skimming-based Language Models

Authors: Shengyao Zhang, Mi Zhang, Xudong Pan, Min Yang | Published: 2023-12-15 | Updated: 2023-12-18
Tags: Evolution of AI, LLM Security, Watermarking

Maatphor: Automated Variant Analysis for Prompt Injection Attacks

Authors: Ahmed Salem, Andrew Paverd, Boris Köpf | Published: 2023-12-12
Tags: LLM Security, Prompt Injection, Evaluation Method

Make Them Spill the Beans! Coercive Knowledge Extraction from (Production) LLMs

Authors: Zhuo Zhang, Guangyu Shen, Guanhong Tao, Siyuan Cheng, Xiangyu Zhang | Published: 2023-12-08
Tags: LLM Security, Prompt Injection, Inappropriate Content Generation

Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks

Authors: Shuli Jiang, Swanand Ravindra Kadhe, Yi Zhou, Ling Cai, Nathalie Baracaldo | Published: 2023-12-07
Tags: LLM Security, Poisoning Attack, Model Performance Evaluation

DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions

Authors: Fangzhou Wu, Xiaogeng Liu, Chaowei Xiao | Published: 2023-12-07 | Updated: 2023-12-12
Tags: LLM Security, Code Generation, Prompt Injection

Purple Llama CyberSecEval: A Secure Coding Benchmark for Language Models

Authors: Manish Bhatt, Sahana Chennabasappa, Cyrus Nikolaidis, Shengye Wan, Ivan Evtimov, Dominik Gabi, Daniel Song, Faizan Ahmad, Cornelius Aschermann, Lorenzo Fontana, Sasha Frolov, Ravi Prakash Giri, Dhaval Kapil, Yiannis Kozyrakis, David LeBlanc, James Milazzo, Aleksandar Straumann, Gabriel Synnaeve, Varun Vontimitta, Spencer Whitman, Joshua Saxe | Published: 2023-12-07
Tags: LLM Security, Cybersecurity, Prompt Injection