Getting pwn’d by AI: Penetration Testing with Large Language Models | Authors: Andreas Happe, Jürgen Cito | Published: 2023-07-24 | Updated: 2023-08-17 | Tags: LLM Security, Prompt Injection, Penetration Testing Methods
Privacy-Preserving Prompt Tuning for Large Language Model Services | Authors: Yansong Li, Zhixing Tan, Yang Liu | Published: 2023-05-10 | Updated: 2025-01-10 | Tags: DNN IP Protection Method, LLM Security, Privacy Assessment
In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT | Authors: Xinyue Shen, Zeyuan Chen, Michael Backes, Yang Zhang | Published: 2023-04-18 | Updated: 2023-10-05 | Tags: LLM Security, Prompt Injection, User Experience Evaluation
Stochastic Parrots Looking for Stochastic Parrots: LLMs are Easy to Fine-Tune and Hard to Detect with other LLMs | Authors: Da Silva Gameiro Henrique, Andrei Kucharavy, Rachid Guerraoui | Published: 2023-04-18 | Tags: LLM Security, Text Generation Method, Generative Adversarial Network
Multi-step Jailbreaking Privacy Attacks on ChatGPT | Authors: Haoran Li, Dadi Guo, Wei Fan, Mingshi Xu, Jie Huang, Fanpu Meng, Yangqiu Song | Published: 2023-04-11 | Updated: 2023-11-01 | Tags: LLM Security, Privacy Analysis, Prompt Injection