Abstract
In this paper, we introduce SecQA, a novel dataset tailored for evaluating
the performance of Large Language Models (LLMs) in the domain of computer
security. Utilizing multiple-choice questions generated by GPT-4 based on the
"Computer Systems Security: Planning for Success" textbook, SecQA aims to
assess LLMs' understanding and application of security principles. We detail
the structure and intent of SecQA, which comprises two versions of increasing
difficulty, to enable concise evaluation across a range of question complexity.
Additionally, we present an extensive evaluation of prominent LLMs, including
GPT-3.5-Turbo, GPT-4, Llama-2, Vicuna, Mistral, and Zephyr models, using both
0-shot and 5-shot learning settings. Our results on the SecQA v1 and v2
datasets highlight the varying capabilities and limitations of these
models in the computer security context. This study not only offers insights
into the current state of LLMs in understanding security-related content but
also establishes SecQA as a benchmark for future advancements in this critical
research area.
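
The 0-shot and 5-shot evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's exact protocol: the `MCQ` item schema, the prompt template, and the letter-matching scoring rule are all assumptions, and `model` stands in for any LLM callable that maps a prompt string to a predicted answer letter.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Hypothetical item format; the actual SecQA schema may differ.
@dataclass
class MCQ:
    question: str
    choices: Sequence[str]  # option texts, labeled A, B, C, ... in order
    answer: str             # gold answer letter, e.g. "B"

def format_item(item: MCQ, with_answer: bool) -> str:
    """Render one question; exemplars show the gold answer, the query does not."""
    lines = [f"Question: {item.question}"]
    for letter, text in zip("ABCD", item.choices):
        lines.append(f"{letter}. {text}")
    lines.append(f"Answer: {item.answer}" if with_answer else "Answer:")
    return "\n".join(lines)

def build_prompt(exemplars: Sequence[MCQ], query: MCQ) -> str:
    """k-shot prompt: k solved exemplars (k=0 or k=5), then the unanswered query."""
    parts = [format_item(ex, with_answer=True) for ex in exemplars]
    parts.append(format_item(query, with_answer=False))
    return "\n\n".join(parts)

def accuracy(model: Callable[[str], str],
             exemplars: Sequence[MCQ],
             test_set: Sequence[MCQ]) -> float:
    """Fraction of test questions whose predicted letter matches the gold letter."""
    correct = sum(
        model(build_prompt(exemplars, q)).strip().upper().startswith(q.answer)
        for q in test_set
    )
    return correct / len(test_set)
```

In this framing, the only difference between the 0-shot and 5-shot settings is whether `exemplars` is empty or holds five solved questions prepended to the prompt; the scoring metric (exact-match accuracy on the answer letter) stays the same.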