Prompt Injection

Digital Forgetting in Large Language Models: A Survey of Unlearning Methods

Authors: Alberto Blanco-Justicia, Najeeb Jebreel, Benet Manzanares, David Sánchez, Josep Domingo-Ferrer, Guillem Collell, Kuan Eeik Tan | Published: 2024-04-02
Topics: LLM Performance Evaluation | Prompt Injection | Machine Unlearning

What is in Your Safe Data? Identifying Benign Data that Breaks Safety

Authors: Luxi He, Mengzhou Xia, Peter Henderson | Published: 2024-04-01 | Updated: 2024-08-20
Topics: Data Selection Strategy | Prompt Injection | Psychological Manipulation

To Err is Machine: Vulnerability Detection Challenges LLM Reasoning

Authors: Benjamin Steenhoek, Md Mahbubur Rahman, Monoshi Kumar Roy, Mirza Sanjida Alam, Hengbo Tong, Swarna Das, Earl T. Barr, Wei Le | Published: 2024-03-25 | Updated: 2025-01-07
Topics: DoS Mitigation | LLM Security | Prompt Injection

Defending Against Indirect Prompt Injection Attacks With Spotlighting

Authors: Keegan Hines, Gary Lopez, Matthew Hall, Federico Zarfati, Yonatan Zunger, Emre Kiciman | Published: 2024-03-20
Topics: Indirect Prompt Injection | Prompt Injection | Malicious Prompt

Leveraging Large Language Models to Detect npm Malicious Packages

Authors: Nusrat Zahan, Philipp Burckhardt, Mikola Lysenko, Feross Aboukhadijeh, Laurie Williams | Published: 2024-03-18 | Updated: 2025-01-06
Topics: LLM Performance Evaluation | Prompt Injection | Malware Classification

Helpful or Harmful? Exploring the Efficacy of Large Language Models for Online Grooming Prevention

Authors: Ellie Prosser, Matthew Edwards | Published: 2024-03-14
Topics: LLM Performance Evaluation | Online Safety Advice | Prompt Injection

AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting

Authors: Yu Wang, Xiaogeng Liu, Yu Li, Muhao Chen, Chaowei Xiao | Published: 2024-03-14
Topics: Prompt Injection | Structural Attack | Defense Method

CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion

Authors: Qibing Ren, Chang Gao, Jing Shao, Junchi Yan, Xin Tan, Wai Lam, Lizhuang Ma | Published: 2024-03-12 | Updated: 2024-09-14
Topics: LLM Security | Code Generation | Prompt Injection

ACFIX: Guiding LLMs with Mined Common RBAC Practices for Context-Aware Repair of Access Control Vulnerabilities in Smart Contracts

Authors: Lyuye Zhang, Kaixuan Li, Kairan Sun, Daoyuan Wu, Ye Liu, Haoye Tian, Yang Liu | Published: 2024-03-11 | Updated: 2024-03-18
Topics: Smart Contract | Prompt Injection | Automated Vulnerability Remediation

DP-TabICL: In-Context Learning with Differentially Private Tabular Data

Authors: Alycia N. Carey, Karuna Bhaila, Kennedy Edemacu, Xintao Wu | Published: 2024-03-08
Topics: Few-Shot Learning | Privacy Protection Method | Prompt Injection