Chasing Shadows: Pitfalls in LLM Security Research

Authors: Jonathan Evertz, Niklas Risse, Nicolai Neuer, Andreas Müller, Philipp Normann, Gaetano Sapia, Srishti Gupta, David Pape, Soumya Shaw, Devansh Srivastav, Christian Wressnegger, Erwin Quiring, Thorsten Eisenhofer, Daniel Arp, Lea Schönherr | Published: 2025-12-10
Prompt Injection
Prompt leaking

In-Context Representation Hijacking

Authors: Itay Yona, Amir Sarid, Michael Karasik, Yossi Gandelsman | Published: 2025-12-03
Cybersecurity
Prompt Injection
Prompt leaking

CryptoQA: A Large-scale Question-answering Dataset for AI-assisted Cryptography

Authors: Mayar Elfares, Pascal Reisert, Tilman Dietz, Manpa Barman, Ahmed Zaki, Ralf Küsters, Andreas Bulling | Published: 2025-12-02
Dataset Generation
Prompt Injection
Prompt leaking

COGNITION: From Evaluation to Defense against Multimodal LLM CAPTCHA Solvers

Authors: Junyu Wang, Changjia Zhu, Yuanbo Zhou, Lingyao Li, Xu He, Junjie Xiong | Published: 2025-12-02
Prompt leaking
Model Performance Evaluation
Model Extraction Attack

LLM-CSEC: Empirical Evaluation of Security in C/C++ Code Generated by Large Language Models

Authors: Muhammad Usman Shahid, Chuadhry Mujeeb Ahmed, Rajiv Ranjan | Published: 2025-11-24
Automation of Cybersecurity
Prompt leaking
Risk Assessment Method

RoguePrompt: Dual-Layer Ciphering for Self-Reconstruction to Circumvent LLM Moderation

Authors: Benyamin Tafreshian | Published: 2025-11-24
Indirect Prompt Injection
Prompt leaking
Malicious Prompt

Q-MLLM: Vector Quantization for Robust Multimodal Large Language Model Security

Authors: Wei Zhao, Zhe Li, Yige Li, Jun Sun | Published: 2025-11-20
Prompt leaking
Robustness Improvement Method
Digital Watermarking for Generative AI

PSM: Prompt Sensitivity Minimization via LLM-Guided Black-Box Optimization

Authors: Hussein Jawad, Nicolas Brunel | Published: 2025-11-20
Privacy-Preserving Data Mining
Prompt leaking
Malicious Prompt

Taxonomy, Evaluation and Exploitation of IPI-Centric LLM Agent Defense Frameworks

Authors: Zimo Ji, Xunguang Wang, Zongjie Li, Pingchuan Ma, Yudong Gao, Daoyuan Wu, Xincheng Yan, Tian Tian, Shuai Wang | Published: 2025-11-19
Indirect Prompt Injection
Prompt leaking
Adaptive Misuse Detection

TZ-LLM: Protecting On-Device Large Language Models with Arm TrustZone

Authors: Xunjie Wang, Jiacheng Shi, Zihan Zhao, Yang Yu, Zhichao Hua, Jinyu Gu | Published: 2025-11-17
Prompt leaking
Model DoS
Performance Evaluation Metrics