Prompt Injection

SPADE: Enhancing Adaptive Cyber Deception Strategies with Generative AI and Structured Prompt Engineering

Authors: Shihab Ahmed, A B M Mohaimenur Rahman, Md Morshed Alam, Md Sajidul Islam Sajid | Published: 2025-01-01
Cybersecurity
Prompt Injection
Prompt Engineering

SecBench: A Comprehensive Multi-Dimensional Benchmarking Dataset for LLMs in Cybersecurity

Authors: Pengfei Jing, Mengyun Tang, Xiaorong Shi, Xing Zheng, Sen Nie, Shi Wu, Yong Yang, Xiapu Luo | Published: 2024-12-30 | Updated: 2025-01-06
LLM Performance Evaluation
Cybersecurity
Prompt Injection

From Vulnerabilities to Remediation: A Systematic Literature Review of LLMs in Code Security

Authors: Enna Basic, Alberto Giaretta | Published: 2024-12-19 | Updated: 2025-04-14
Prompt Injection
Prompt Leaking
Vulnerability Detection

SATA: A Paradigm for LLM Jailbreak via Simple Assistive Task Linkage

Authors: Xiaoning Dong, Wenbo Hu, Wei Xu, Tianxing He | Published: 2024-12-19 | Updated: 2025-03-21
Prompt Injection
Large Language Model
Adversarial Learning

Safeguarding System Prompts for LLMs

Authors: Zhifeng Jiang, Zhihua Jin, Guoliang He | Published: 2024-12-18 | Updated: 2025-01-09
LLM Performance Evaluation
Prompt Injection
Defense Method

Can LLM Prompting Serve as a Proxy for Static Analysis in Vulnerability Detection

Authors: Ira Ceka, Feitong Qiao, Anik Dey, Aastha Valecha, Gail Kaiser, Baishakhi Ray | Published: 2024-12-16 | Updated: 2025-01-18
LLM Performance Evaluation
Prompting Strategy
Prompt Injection

Heuristic-Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models

Authors: Ma Teng, Jia Xiaojun, Duan Ranjie, Li Xinfeng, Huang Yihao, Chu Zhixuan, Liu Yang, Ren Wenqi | Published: 2024-12-08 | Updated: 2025-01-03
Content Moderation
Prompt Injection
Attack Method

ChatNVD: Advancing Cybersecurity Vulnerability Assessment with Large Language Models

Authors: Shivansh Chopra, Hussain Ahmad, Diksha Goel, Claudia Szabo | Published: 2024-12-06 | Updated: 2025-05-20
Text Generation Method
Prompt Injection
Computational Efficiency

VLSBench: Unveiling Visual Leakage in Multimodal Safety

Authors: Xuhao Hu, Dongrui Liu, Hao Li, Xuanjing Huang, Jing Shao | Published: 2024-11-29 | Updated: 2025-01-17
Prompt Injection
Safety Alignment

Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment

Authors: Soumya Suvra Ghosal, Souradip Chakraborty, Vaibhav Singh, Tianrui Guan, Mengdi Wang, Ahmad Beirami, Furong Huang, Alvaro Velasquez, Dinesh Manocha, Amrit Singh Bedi | Published: 2024-11-27 | Updated: 2025-03-20
Prompt Injection
Safety Alignment
Adversarial Attack