Abstract
The increasing deployment of large language models (LLMs) in the
cybersecurity domain underscores the need for effective model selection and
evaluation. However, traditional evaluation methods often overlook specific
cybersecurity knowledge gaps that contribute to performance limitations. To
address this, we develop CSEBenchmark, a fine-grained cybersecurity evaluation
framework based on 345 knowledge points expected of cybersecurity experts.
Drawing on cognitive science, we categorize these points into factual,
conceptual, and procedural types, enabling the design of 11,050 tailored
multiple-choice questions. We evaluate 12 popular LLMs on CSEBenchmark and find
that even the best-performing model achieves only 85.42% overall accuracy, with
particular knowledge gaps in the use of specialized tools and uncommon
commands. Different LLMs exhibit distinct knowledge gaps; even large models
from the same family may perform poorly on knowledge points where smaller
models excel.
By identifying and addressing specific knowledge gaps in each LLM, we achieve
up to an 84% improvement in correcting previously incorrect predictions across
three existing benchmarks for two cybersecurity tasks. Furthermore, our
assessment of each LLM's knowledge alignment with specific cybersecurity roles
reveals that different models align better with different roles, such as GPT-4o
for the Google Senior Intelligence Analyst role and DeepSeek-V3 for the Amazon
Privacy Engineer role. These findings highlight the importance of aligning LLM
selection with the specific knowledge requirements of different cybersecurity
roles for optimal performance.