Prompt leaking

Select Me! When You Need a Tool: A Black-box Text Attack on Tool Selection

Authors: Liuji Chen, Hao Gao, Jinghao Zhang, Qiang Liu, Shu Wu, Liang Wang | Published: 2025-04-07
Tags: Prompt leaking, Information Security, Adversarial Example

Generative Large Language Model usage in Smart Contract Vulnerability Detection

Authors: Peter Ince, Jiangshan Yu, Joseph K. Liu, Xiaoning Du | Published: 2025-04-07
Tags: Prompt Injection, Prompt leaking, Vulnerability Analysis

Representation Bending for Large Language Model Safety

Authors: Ashkan Yousefpour, Taeheon Kim, Ryan S. Kwon, Seungbeen Lee, Wonje Jeung, Seungju Han, Alvin Wan, Harrison Ngan, Youngjae Yu, Jonghyun Choi | Published: 2025-04-02
Tags: Prompt Injection, Prompt leaking, Safety Alignment

THEMIS: Towards Practical Intellectual Property Protection for Post-Deployment On-Device Deep Learning Models

Authors: Yujin Huang, Zhi Zhang, Qingchuan Zhao, Xingliang Yuan, Chunyang Chen | Published: 2025-03-31
Tags: Prompt leaking, Model Protection Methods, Model Extraction Attack

Large Language Models are Unreliable for Cyber Threat Intelligence

Authors: Emanuele Mezzi, Fabio Massacci, Katja Tuma | Published: 2025-03-29 | Updated: 2025-07-16
Tags: Few-Shot Learning, Prompt leaking, Performance Evaluation Method

Large Language Models powered Network Attack Detection: Architecture, Opportunities and Case Study

Authors: Xinggong Zhang, Qingyang Li, Yunpeng Tan, Zongming Guo, Lei Zhang, Yong Cui | Published: 2025-03-24
Tags: Prompt Injection, Prompt leaking, Intrusion Detection System

Prada: Black-Box LLM Adaptation with Private Data on Resource-Constrained Devices

Authors: Ziyao Wang, Yexiao He, Zheyu Shen, Yu Li, Guoheng Sun, Myungjin Lee, Ang Li | Published: 2025-03-19
Tags: Privacy Protection Method, Prompt leaking, Deep Learning

Temporal Context Awareness: A Defense Framework Against Multi-turn Manipulation Attacks on Large Language Models

Authors: Prashant Kulkarni, Assaf Namer | Published: 2025-03-18
Tags: Prompt Injection, Prompt leaking, Attack Method

VeriLeaky: Navigating IP Protection vs Utility in Fine-Tuning for LLM-Driven Verilog Coding

Authors: Zeng Wang, Minghao Shao, Mohammed Nabeel, Prithwish Basu Roy, Likhitha Mankali, Jitendra Bhandari, Ramesh Karri, Ozgur Sinanoglu, Muhammad Shafique, Johann Knechtel | Published: 2025-03-17 | Updated: 2025-04-14
Tags: Data Protection Method, Privacy Protection Method, Prompt leaking

VeriContaminated: Assessing LLM-Driven Verilog Coding for Data Contamination

Authors: Zeng Wang, Minghao Shao, Jitendra Bhandari, Likhitha Mankali, Ramesh Karri, Ozgur Sinanoglu, Muhammad Shafique, Johann Knechtel | Published: 2025-03-17 | Updated: 2025-04-14
Tags: FPGA, Program Interpretation Graph, Prompt leaking