Abstract
Large Language Models (LLMs) have the potential to enhance Agent-Based
Modeling by better representing complex interdependent cybersecurity systems,
improving cybersecurity threat modeling and risk management. However,
evaluating LLMs in this context is crucial for legal compliance and effective
application development. Existing LLM evaluation frameworks often overlook the
human factor and cognitive computing capabilities essential for interdependent
cybersecurity. To address this gap, I propose OllaBench, a novel evaluation
framework that assesses LLMs' accuracy, wastefulness, and consistency in
answering scenario-based information security compliance and non-compliance
questions. OllaBench is built on a foundation of 24 cognitive behavioral
theories and empirical evidence from 38 peer-reviewed papers. OllaBench was
used to evaluate 21 LLMs, including both open-weight and commercial models from
OpenAI, Anthropic, Google, Microsoft, Meta, and others. The results reveal that
while commercial LLMs have the highest overall accuracy scores, there is
significant room for improvement. Smaller, low-resolution open-weight LLMs are
not far behind in performance, and there are significant differences in token
efficiency and consistency among the evaluated models. OllaBench provides a
user-friendly interface and supports a wide range of LLM platforms, making it a
valuable tool for researchers and solution developers in the field of
human-centric interdependent cybersecurity and beyond.
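The abstract names three metrics (accuracy, wastefulness, consistency) without defining how they are computed. The sketch below shows one plausible scoring loop over repeated runs of scenario-based questions; the names (`Response`, `score_model`) and the metric definitions used here (waste as tokens spent on wrong answers, consistency as full agreement across repeated runs of the same question) are illustrative assumptions, not OllaBench's actual scoring code.

```python
# Minimal sketch of scoring accuracy, wastefulness, and consistency.
# All names and metric definitions are illustrative assumptions, not
# OllaBench's actual implementation.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Response:
    answer: str       # the option the model selected
    tokens_used: int  # tokens consumed producing this response


def score_model(responses: dict[str, list[Response]],
                answer_key: dict[str, str]) -> dict[str, float]:
    """Score one model given several repeated runs per question id."""
    correct = wasted_tokens = consistent = total_runs = 0
    for qid, runs in responses.items():
        total_runs += len(runs)
        for r in runs:
            if r.answer == answer_key[qid]:
                correct += 1
            else:
                # Count tokens spent on wrong answers as waste.
                wasted_tokens += r.tokens_used
        # A question is consistent if every repeated run agrees.
        top_count = Counter(r.answer for r in runs).most_common(1)[0][1]
        consistent += top_count == len(runs)
    return {
        "accuracy": correct / total_runs,
        "avg_wasted_tokens": wasted_tokens / total_runs,
        "consistency": consistent / len(responses),
    }
```

Repeating each question several times is what separates consistency from accuracy: a model can answer correctly on average while still flipping between answers across runs.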
External Datasets
OllaBench benchmark dataset (10,000 items)
CyberQ (4,000 items)
The ETHICS benchmark dataset (130,000 items)
Multiple benchmark datasets for evaluating LLMs’ legal reasoning
Multi-level benchmark dataset with attack/defense enhanced scenarios
Benchmark dataset to measure LLMs’ performance on false-belief tasks
4,950 scenarios to measure LLMs’ vulnerability reasoning