Human-Centered Privacy Research in the Age of Large Language Models
Authors: Tianshi Li, Sauvik Das, Hao-Ping Lee, Dakuo Wang, Bingsheng Yao, Zhiping Zhang | Published: 2024-02-03
Tags: Privacy Protection, Prompt Injection, Human-Centered Approach

Occasionally Secure: A Comparative Analysis of Code Generation Assistants
Authors: Ran Elgedawy, John Sadik, Senjuti Dutta, Anuj Gautam, Konstantinos Georgiou, Farzin Gholamrezae, Fujiao Ji, Kyungchan Lim, Qian Liu, Scott Ruoti | Published: 2024-02-01
Tags: LLM Performance Evaluation, Code Generation, Prompt Injection

A Cross-Language Investigation into Jailbreak Attacks in Large Language Models
Authors: Jie Li, Yi Liu, Chongyang Liu, Ling Shi, Xiaoning Ren, Yaowen Zheng, Yang Liu, Yinxing Xue | Published: 2024-01-30
Tags: Character Role Acting, Prompt Injection, Multilingual LLM Jailbreak

LLM4Vuln: A Unified Evaluation Framework for Decoupling and Enhancing LLMs' Vulnerability Reasoning
Authors: Yuqiang Sun, Daoyuan Wu, Yue Xue, Han Liu, Wei Ma, Lyuye Zhang, Yang Liu, Yingjiu Li | Published: 2024-01-29 | Updated: 2025-01-13
Tags: LLM Performance Evaluation, Prompt Injection, Vulnerability Management

Evaluation of LLM Chatbots for OSINT-based Cyber Threat Awareness
Authors: Samaneh Shafee, Alysson Bessani, Pedro M. Ferreira | Published: 2024-01-26 | Updated: 2024-04-19
Tags: LLM Performance Evaluation, Cybersecurity, Prompt Injection

PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety
Authors: Zaibin Zhang, Yongting Zhang, Lijun Li, Hongzhi Gao, Lijun Wang, Huchuan Lu, Feng Zhao, Yu Qiao, Jing Shao | Published: 2024-01-22 | Updated: 2024-08-20
Tags: Prompt Injection, Safety Alignment, Psychological Manipulation

BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models
Authors: Zhen Xiang, Fengqing Jiang, Zidi Xiong, Bhaskar Ramasubramanian, Radha Poovendran, Bo Li | Published: 2024-01-20
Tags: LLM Performance Evaluation, Backdoor Attack, Prompt Injection

Vulnerabilities of Foundation Model Integrated Federated Learning Under Adversarial Threats
Authors: Chen Wu, Xi Li, Jiaqi Wang | Published: 2024-01-18 | Updated: 2024-04-02
Tags: Prompt Injection, Poisoning, Federated Learning

Excuse me, sir? Your language model is leaking (information)
Authors: Or Zamir | Published: 2024-01-18
Tags: Watermarking, Prompt Injection, Dynamic Error Correction Code

Lateral Phishing With Large Language Models: A Large Organization Comparative Study
Authors: Mazal Bethany, Athanasios Galiopoulos, Emet Bethany, Mohammad Bahrami Karkevandi, Nicole Beebe, Nishant Vishwamitra, Peyman Najafirad | Published: 2024-01-18 | Updated: 2025-04-15
Tags: Phishing Attack, Prompt Injection