Prompt leaking

Large Language Models powered Network Attack Detection: Architecture, Opportunities and Case Study

Authors: Xinggong Zhang, Qingyang Li, Yunpeng Tan, Zongming Guo, Lei Zhang, Yong Cui | Published: 2025-03-24
Prompt Injection
Prompt leaking
Intrusion Detection System

Prada: Black-Box LLM Adaptation with Private Data on Resource-Constrained Devices

Authors: Ziyao Wang, Yexiao He, Zheyu Shen, Yu Li, Guoheng Sun, Myungjin Lee, Ang Li | Published: 2025-03-19
Privacy Protection Method
Prompt leaking
Deep Learning

Temporal Context Awareness: A Defense Framework Against Multi-turn Manipulation Attacks on Large Language Models

Authors: Prashant Kulkarni, Assaf Namer | Published: 2025-03-18
Prompt Injection
Prompt leaking
Attack Method

VeriLeaky: Navigating IP Protection vs Utility in Fine-Tuning for LLM-Driven Verilog Coding

Authors: Zeng Wang, Minghao Shao, Mohammed Nabeel, Prithwish Basu Roy, Likhitha Mankali, Jitendra Bhandari, Ramesh Karri, Ozgur Sinanoglu, Muhammad Shafique, Johann Knechtel | Published: 2025-03-17 | Updated: 2025-04-14
Data Protection Method
Privacy Protection Method
Prompt leaking

VeriContaminated: Assessing LLM-Driven Verilog Coding for Data Contamination

Authors: Zeng Wang, Minghao Shao, Jitendra Bhandari, Likhitha Mankali, Ramesh Karri, Ozgur Sinanoglu, Muhammad Shafique, Johann Knechtel | Published: 2025-03-17 | Updated: 2025-04-14
FPGA
Program Interpretation Graph
Prompt leaking

CASTLE: Benchmarking Dataset for Static Code Analyzers and LLMs towards CWE Detection

Authors: Richard A. Dubniczky, Krisztofer Zoltán Horvát, Tamás Bisztray, Mohamed Amine Ferrag, Lucas C. Cordeiro, Norbert Tihanyi | Published: 2025-03-12 | Updated: 2025-03-31
Security Metric
Prompt leaking
Vulnerability Mitigation Technique

Cyber Defense Reinvented: Large Language Models as Threat Intelligence Copilots

Authors: Xiaoqun Liu, Jiacheng Liang, Qiben Yan, Jiyong Jang, Sicheng Mao, Muchao Ye, Jinyuan Jia, Zhaohan Xi | Published: 2025-02-28 | Updated: 2025-04-16
Cyber Threat Intelligence
Prompt leaking
Model Extraction Attack

Unveiling Privacy Risks in LLM Agent Memory

Authors: Bo Wang, Weiyi He, Shenglai Zeng, Zhen Xiang, Yue Xing, Jiliang Tang, Pengfei He | Published: 2025-02-17 | Updated: 2025-06-03
Privacy Analysis
Prompt leaking
Causes of Information Leakage

QueryAttack: Jailbreaking Aligned Large Language Models Using Structured Non-natural Query Language

Authors: Qingsong Zou, Jingyu Xiao, Qing Li, Zhi Yan, Yuhang Wang, Li Xu, Wenxuan Wang, Kuofeng Gao, Ruoyu Li, Yong Jiang | Published: 2025-02-13 | Updated: 2025-05-26
Disabling Safety Mechanisms of LLM
Prompt leaking
Educational Analysis

Trustworthy AI: Safety, Bias, and Privacy — A Survey

Authors: Xingli Fang, Jianwei Li, Varun Mulchandani, Jung-Eun Kim | Published: 2025-02-11 | Updated: 2025-06-11
Bias
Prompt leaking
Differential Privacy