Abstract
Recent years have seen an explosion of activity in Generative AI,
specifically Large Language Models (LLMs), revolutionising applications across
various fields. Smart contract vulnerability detection is no exception: smart
contracts exist on public chains and can transact billions of dollars daily,
so continuous improvement in vulnerability detection is crucial. This has led
many researchers to investigate the use of generative LLMs to aid in detecting
vulnerabilities in smart contracts.
This paper presents a systematic review of current LLM-based smart
contract vulnerability detection tools, comparing them against the traditional
static and dynamic analysis tools Slither and Mythril. Our analysis highlights
the key areas where each performs better and demonstrates that, while
LLM-based tools show promise, those available for testing are not yet ready to
replace more traditional tools. We conclude with recommendations on how LLMs
are best used in the vulnerability detection process and offer insights for
improving on the state of the art via hybrid approaches and targeted
pre-training of much smaller models.