Abstract
In the current cybersecurity landscape, protecting military devices such as
communication and battlefield management systems against sophisticated cyber
attacks is crucial. Malware exploits vulnerabilities through stealthy techniques,
often evading traditional detection mechanisms such as signature-based scanning. The
application of ML/DL in vulnerability detection has been extensively explored
in the literature. However, current ML/DL vulnerability detection methods
struggle with understanding the context and intent behind complex attacks.
Integrating large language models (LLMs) with system call analysis offers a
promising approach to enhance malware detection. This work presents a novel
framework leveraging LLMs to classify malware based on system call data. The
framework uses transfer learning to adapt pre-trained LLMs for malware
detection. By retraining LLMs on a dataset of benign and malicious system
calls, the models are refined to detect signs of malware activity. Experiments
with a dataset of over 1TB of system calls demonstrate that models with larger
context sizes, such as BigBird and Longformer, achieve superior accuracy and
F1-Score of approximately 0.86. The results highlight the importance of context
size in improving detection rates and underscore the trade-offs between
computational complexity and performance. This approach shows significant
potential for real-time detection in high-stakes environments, offering a
robust solution to evolving cyber threats.
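The preprocessing step implied by the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): each system call in a trace is mapped to an integer token ID, and the sequence is truncated or padded to a transformer's context window (Longformer's default window is 4096 tokens). All function and variable names here are illustrative assumptions.

```python
# Hypothetical sketch: encode a system-call trace into fixed-length token IDs
# suitable for a long-context transformer classifier. Illustrative only; the
# paper's actual pipeline is not specified in the abstract.

def build_vocab(traces):
    """Assign an integer ID to each distinct system call; 0 is reserved for padding."""
    vocab = {"<pad>": 0}
    for trace in traces:
        for call in trace:
            vocab.setdefault(call, len(vocab))
    return vocab

def encode(trace, vocab, context_size=4096):
    """Map calls to IDs, then truncate or right-pad to the model's context size."""
    ids = [vocab.get(call, 0) for call in trace]  # unknown calls fall back to padding ID
    ids = ids[:context_size]
    ids += [0] * (context_size - len(ids))
    return ids

# Toy usage with two short traces and a small context window of 8 tokens.
traces = [["open", "read", "close"], ["open", "write", "mmap", "close"]]
vocab = build_vocab(traces)
encoded = encode(traces[1], vocab, context_size=8)
print(len(encoded))  # → 8
```

Because real traces can run far longer than any context window, the choice of context size directly bounds how much of a trace the model sees at once, which is one plausible reason the long-context models cited in the abstract outperform shorter-context alternatives.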