Prompt leaking

Temporal Context Awareness: A Defense Framework Against Multi-turn Manipulation Attacks on Large Language Models

Authors: Prashant Kulkarni, Assaf Namer | Published: 2025-03-18
Prompt Injection
Prompt leaking
Attack Method

VeriLeaky: Navigating IP Protection vs. Utility in Fine-Tuning for LLM-Driven Verilog Coding

Authors: Zeng Wang, Minghao Shao, Mohammed Nabeel, Prithwish Basu Roy, Likhitha Mankali, Jitendra Bhandari, Ramesh Karri, Ozgur Sinanoglu, Muhammad Shafique, Johann Knechtel | Published: 2025-03-17 | Updated: 2025-04-14
Data Protection Method
Privacy Protection Method
Prompt leaking

VeriContaminated: Assessing LLM-Driven Verilog Coding for Data Contamination

Authors: Zeng Wang, Minghao Shao, Jitendra Bhandari, Likhitha Mankali, Ramesh Karri, Ozgur Sinanoglu, Muhammad Shafique, Johann Knechtel | Published: 2025-03-17 | Updated: 2025-04-14
FPGA
Program Interpretation Graph
Prompt leaking

CASTLE: Benchmarking Dataset for Static Code Analyzers and LLMs towards CWE Detection

Authors: Richard A. Dubniczky, Krisztofer Zoltán Horvát, Tamás Bisztray, Mohamed Amine Ferrag, Lucas C. Cordeiro, Norbert Tihanyi | Published: 2025-03-12 | Updated: 2025-03-31
Security Metric
Prompt leaking
Vulnerability Mitigation Technique

Cyber Defense Reinvented: Large Language Models as Threat Intelligence Copilots

Authors: Xiaoqun Liu, Jiacheng Liang, Qiben Yan, Jiyong Jang, Sicheng Mao, Muchao Ye, Jinyuan Jia, Zhaohan Xi | Published: 2025-02-28 | Updated: 2025-04-16
Cyber Threat Intelligence
Prompt leaking
Model Extraction Attack

Unveiling Privacy Risks in LLM Agent Memory

Authors: Bo Wang, Weiyi He, Shenglai Zeng, Zhen Xiang, Yue Xing, Jiliang Tang, Pengfei He | Published: 2025-02-17 | Updated: 2025-06-03
Privacy Analysis
Prompt leaking
Causes of Information Leakage

QueryAttack: Jailbreaking Aligned Large Language Models Using Structured Non-natural Query Language

Authors: Qingsong Zou, Jingyu Xiao, Qing Li, Zhi Yan, Yuhang Wang, Li Xu, Wenxuan Wang, Kuofeng Gao, Ruoyu Li, Yong Jiang | Published: 2025-02-13 | Updated: 2025-05-26
Disabling Safety Mechanisms of LLM
Prompt leaking
Educational Analysis

Trustworthy AI: Safety, Bias, and Privacy — A Survey

Authors: Xingli Fang, Jianwei Li, Varun Mulchandani, Jung-Eun Kim | Published: 2025-02-11 | Updated: 2025-06-11
Bias
Prompt leaking
Differential Privacy

Toward Intelligent and Secure Cloud: Large Language Model Empowered Proactive Defense

Authors: Yuyang Zhou, Guang Cheng, Kang Du, Zihan Chen, Yuyu Zhao | Published: 2024-12-30 | Updated: 2025-04-15
Prompt leaking
Model DoS
Information Security

From Vulnerabilities to Remediation: A Systematic Literature Review of LLMs in Code Security

Authors: Enna Basic, Alberto Giaretta | Published: 2024-12-19 | Updated: 2025-04-14
Prompt Injection
Prompt leaking
Vulnerability Detection