MIA-Tuner: Adapting Large Language Models as Pre-training Text Detector | Authors: Wenjie Fu, Huandong Wang, Chen Gao, Guanghua Liu, Yong Li, Tao Jiang | Published: 2024-08-16 | Tags: LLM Security, Prompt Injection, Membership Inference
DePrompt: Desensitization and Evaluation of Personal Identifiable Information in Large Language Model Prompts | Authors: Xiongtao Sun, Gan Liu, Zhipeng He, Hui Li, Xiaoguang Li | Published: 2024-08-16 | Tags: LLM Security, Privacy Protection Method, Prompt Injection
Prefix Guidance: A Steering Wheel for Large Language Models to Defend Against Jailbreak Attacks | Authors: Jiawei Zhao, Kejiang Chen, Xiaojian Yuan, Weiming Zhang | Published: 2024-08-15 | Updated: 2024-08-22 | Tags: LLM Security, Prompt Injection, Defense Method
Casper: Prompt Sanitization for Protecting User Privacy in Web-Based Large Language Models | Authors: Chun Jie Chong, Chenxi Hou, Zhihao Yao, Seyed Mohammadjavad Seyed Talebi | Published: 2024-08-13 | Tags: LLM Security, Privacy Protection, Prompt Injection
Kov: Transferable and Naturalistic Black-Box LLM Attacks using Markov Decision Processes and Tree Search | Authors: Robert J. Moss | Published: 2024-08-11 | Tags: LLM Security, Prompt Injection, Compliance with Ethical Guidelines
Towards Automatic Hands-on-Keyboard Attack Detection Using LLMs in EDR Solutions | Authors: Amit Portnoy, Ehud Azikri, Shay Kels | Published: 2024-08-04 | Tags: LLM Security, Endpoint Detection, Data Collection
Pathway to Secure and Trustworthy ZSM for LLMs: Attacks, Defense, and Opportunities | Authors: Sunder Ali Khowaja, Parus Khuwaja, Kapal Dev, Hussam Al Hamadi, Engin Zeydan | Published: 2024-08-01 | Updated: 2025-01-06 | Tags: LLM Security, Membership Inference, Trust Evaluation Module
Jailbreaking Text-to-Image Models with LLM-Based Agents | Authors: Yingkai Dong, Zheng Li, Xiangtao Meng, Ning Yu, Shanqing Guo | Published: 2024-08-01 | Updated: 2024-09-09 | Tags: LLM Security, Prompt Injection, Model Performance Evaluation
SLIP: Securing LLMs IP Using Weights Decomposition | Authors: Yehonathan Refael, Adam Hakim, Lev Greenberg, Tal Aviv, Satya Lokam, Ben Fishman, Shachar Seidman | Published: 2024-07-15 | Updated: 2024-08-01 | Tags: LLM Security, Watermarking, Secure Communication Channel
TPIA: Towards Target-specific Prompt Injection Attack against Code-oriented Large Language Models | Authors: Yuchen Yang, Hongwei Yao, Bingrun Yang, Yiling He, Yiming Li, Tianwei Zhang, Zhan Qin, Kui Ren, Chun Chen | Published: 2024-07-12 | Updated: 2025-01-16 | Tags: LLM Security, Prompt Injection, Attack Method