Prompt Injection

Human-Centered Privacy Research in the Age of Large Language Models

Authors: Tianshi Li, Sauvik Das, Hao-Ping Lee, Dakuo Wang, Bingsheng Yao, Zhiping Zhang | Published: 2024-02-03
Privacy Protection
Prompt Injection
Human-Centered Approach

Ocassionally Secure: A Comparative Analysis of Code Generation Assistants

Authors: Ran Elgedawy, John Sadik, Senjuti Dutta, Anuj Gautam, Konstantinos Georgiou, Farzin Gholamrezae, Fujiao Ji, Kyungchan Lim, Qian Liu, Scott Ruoti | Published: 2024-02-01
LLM Performance Evaluation
Code Generation
Prompt Injection

A Cross-Language Investigation into Jailbreak Attacks in Large Language Models

Authors: Jie Li, Yi Liu, Chongyang Liu, Ling Shi, Xiaoning Ren, Yaowen Zheng, Yang Liu, Yinxing Xue | Published: 2024-01-30
Character Role-Play
Prompt Injection
Multilingual LLM Jailbreak

LLM4Vuln: A Unified Evaluation Framework for Decoupling and Enhancing LLMs’ Vulnerability Reasoning

Authors: Yuqiang Sun, Daoyuan Wu, Yue Xue, Han Liu, Wei Ma, Lyuye Zhang, Yang Liu, Yingjiu Li | Published: 2024-01-29 | Updated: 2025-01-13
LLM Performance Evaluation
Prompt Injection
Vulnerability Management

Evaluation of LLM Chatbots for OSINT-based Cyber Threat Awareness

Authors: Samaneh Shafee, Alysson Bessani, Pedro M. Ferreira | Published: 2024-01-26 | Updated: 2024-04-19
LLM Performance Evaluation
Cybersecurity
Prompt Injection

PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety

Authors: Zaibin Zhang, Yongting Zhang, Lijun Li, Hongzhi Gao, Lijun Wang, Huchuan Lu, Feng Zhao, Yu Qiao, Jing Shao | Published: 2024-01-22 | Updated: 2024-08-20
Prompt Injection
Safety Alignment
Psychological Manipulation

BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models

Authors: Zhen Xiang, Fengqing Jiang, Zidi Xiong, Bhaskar Ramasubramanian, Radha Poovendran, Bo Li | Published: 2024-01-20
LLM Performance Evaluation
Backdoor Attack
Prompt Injection

Vulnerabilities of Foundation Model Integrated Federated Learning Under Adversarial Threats

Authors: Chen Wu, Xi Li, Jiaqi Wang | Published: 2024-01-18 | Updated: 2024-04-02
Prompt Injection
Poisoning
Federated Learning

Excuse me, sir? Your language model is leaking (information)

Authors: Or Zamir | Published: 2024-01-18
Watermarking
Prompt Injection
Dynamic Error-Correcting Codes

Lateral Phishing With Large Language Models: A Large Organization Comparative Study

Authors: Mazal Bethany, Athanasios Galiopoulos, Emet Bethany, Mohammad Bahrami Karkevandi, Nicole Beebe, Nishant Vishwamitra, Peyman Najafirad | Published: 2024-01-18 | Updated: 2025-04-15
Phishing Attack
Prompt Injection