Abstract
Despite the various approaches employed to detect vulnerabilities, the
number of reported vulnerabilities shows an upward trend over the years. This
suggests that problems are not being caught before code is released, which could
stem from many factors, such as a lack of awareness, the limited efficacy of
existing vulnerability detection tools, or the tools not being user-friendly. To
help address some of the shortcomings of traditional vulnerability detection tools, we
propose using large language models (LLMs) to assist in finding vulnerabilities
in source code. LLMs have shown a remarkable ability to understand and generate
code, underlining their potential in code-related tasks. The aim is to test
multiple state-of-the-art LLMs and identify the best prompting strategies,
allowing extraction of the best value from the LLMs. We provide an overview of
the strengths and weaknesses of the LLM-based approach and compare the results
to those of traditional static analysis tools. We find that LLMs can pinpoint
many more issues than traditional static analysis tools, outperforming them
in terms of recall and F1 score. These results should benefit
software developers and security analysts responsible for ensuring that
code is free of vulnerabilities.