Abstract
The rapid evolution of modern malware presents significant challenges to the
development of effective defense mechanisms. Traditional cyber deception
techniques often rely on static or manually configured parameters, limiting
their adaptability to dynamic and sophisticated threats. This study leverages
Generative AI (GenAI) models to automate the creation of adaptive cyber
deception ploys, focusing on structured prompt engineering (PE) to enhance
relevance, actionability, and deployability. We introduce a systematic
framework (SPADE) to address the inherent challenges that large language
models (LLMs) pose to adaptive deception, including generalized outputs,
ambiguity, under-utilization of contextual information, and scalability constraints.
Evaluations across diverse malware scenarios, using metrics such as Recall,
Exact Match (EM), BLEU Score, and expert quality assessments, identified
ChatGPT-4o as the top performer, achieving high engagement (93%) and
accuracy (96%) with minimal refinement. Gemini and ChatGPT-4o Mini
demonstrated competitive performance, with Llama3.2 showing promise despite
requiring further optimization. These findings highlight the transformative
potential of GenAI in automating scalable, adaptive deception strategies and
underscore the critical role of structured PE in advancing real-world
cybersecurity applications.
External Datasets
Ground truth data derived from [7], which includes 94 malicious API sequences mapped to 31 malware behaviors and linked to MITRE ATT&CK techniques.