Abstract
As blockchain technology and smart contracts become widely adopted, securing
them throughout every stage of the transaction process is essential. A central
concern in improving smart contract security is detecting vulnerabilities,
which we address using classical Machine Learning (ML) models and fine-tuned
Large Language Models (LLMs). The robustness of this work rests on a labeled
smart contract dataset with annotated vulnerabilities, on which several LLMs,
including DistilBERT, are trained and tested alongside various traditional
machine learning algorithms. We train and test these models to classify smart
contract code by vulnerability type and compare their performance. Fine-tuning
the LLMs specifically for smart contract code classification should improve
detection of several well-known vulnerability types, such as Reentrancy,
Integer Overflow, Timestamp Dependency, and Dangerous Delegatecall. Our initial
experimental results show that the fine-tuned LLM surpasses every other model,
achieving over 90% accuracy and advancing existing vulnerability detection
benchmarks. Such performance is strong evidence of LLMs' ability to capture
subtle patterns in code that traditional ML models may miss. We therefore
compare the ML and LLM models side by side to give a clear overview of each
model's strengths, from which the most effective one can be chosen for
real-world applications in smart contract security. Our research combines
machine learning and large language models into a rich, interpretable framework
for detecting different smart contract vulnerabilities, which lays a foundation
for a more secure blockchain ecosystem.
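As a minimal illustration of the classification task the abstract describes, the sketch below trains a classical-ML baseline (TF-IDF features plus logistic regression) to label contract code snippets with one of the four vulnerability types named above. The snippets and labels are toy examples invented for this sketch, not the paper's dataset, and the pipeline stands in for (but is not) the authors' actual models.

```python
# Toy sketch: classify smart contract snippets by vulnerability type
# using a classical-ML baseline (TF-IDF + logistic regression).
# The snippets/labels are illustrative, not the paper's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    'msg.sender.call{value: amount}(""); balances[msg.sender] = 0;',  # Reentrancy pattern
    "uint8 total = a + b;",                                           # Integer Overflow
    "require(block.timestamp % 2 == 0);",                             # Timestamp Dependency
    "addr.delegatecall(data);",                                       # Dangerous Delegatecall
] * 4  # repeat so each class has a few training examples
labels = [
    "Reentrancy",
    "IntegerOverflow",
    "TimestampDependency",
    "DangerousDelegatecall",
] * 4

# Tokenize on identifier-like runs of letters/underscores in the Solidity code.
clf = make_pipeline(
    TfidfVectorizer(token_pattern=r"[A-Za-z_]+"),
    LogisticRegression(max_iter=1000),
)
clf.fit(snippets, labels)

# Classify an unseen snippet containing a delegatecall.
print(clf.predict(["owner.delegatecall(payload);"])[0])
```

A fine-tuned transformer such as DistilBERT replaces the TF-IDF step with learned contextual representations of the code, which is what the abstract credits for capturing subtler patterns than this bag-of-tokens baseline can.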