Today's cyber defenders are overwhelmed by a deluge of security alerts,
threat intelligence signals, and shifting business context, creating an urgent
need for AI systems to enhance operational security work. While Large Language
Models (LLMs) have the potential to automate and scale Security Operations
Center (SOC) operations, existing evaluations do not fully assess the scenarios
most relevant to real-world defenders. This evaluation gap affects both AI
developers and practitioners applying LLMs to SOC automation. Without clear
insight into LLM performance in real-world security scenarios, developers lack
a north star for development, and users cannot reliably select the most
effective models. Meanwhile, malicious actors are using AI to scale cyber
attacks, highlighting the need for open source benchmarks to drive adoption and
community-driven improvement among defenders and model developers. To address
this, we introduce CyberSOCEval, a new suite of open source benchmarks within
CyberSecEval 4. CyberSOCEval includes benchmarks tailored to evaluate LLMs in
two tasks: Malware Analysis and Threat Intelligence Reasoning--core defensive
domains with inadequate coverage in current benchmarks. Our evaluations show
that larger, more modern LLMs tend to perform better, confirming the training
scaling laws paradigm. We also find that reasoning models leveraging test-time
scaling do not achieve the same performance boost seen in coding and math, suggesting these
models have not been trained to reason about cybersecurity analysis, and
pointing to a key opportunity for improvement. Finally, current LLMs are far
from saturating our evaluations, indicating that CyberSOCEval poses a
significant challenge for AI developers working to improve cyber defense capabilities.