Abstract
A class of semantic bugs, Entity-Inconsistency Bugs (EIBs) involve the
misuse of syntactically valid yet semantically incorrect program entities,
such as variable identifiers and function names, and often have security
implications. Unlike
straightforward syntactic vulnerabilities, EIBs are subtle and can remain
undetected for years. Traditional detection methods, such as static analysis
and dynamic testing, often fall short due to the versatile and
context-dependent nature of EIBs. However, with advances in Large Language
Models (LLMs) such as GPT-4, we believe LLM-powered automatic EIB detection
is becoming increasingly feasible thanks to these models' semantic-understanding
abilities. This research first undertakes a systematic measurement of LLMs'
capabilities in detecting EIBs, revealing that GPT-4, while promising, shows
limited recall and precision that hinder its practical application. The primary
problem lies in the model's tendency to focus on irrelevant code snippets
devoid of EIBs. To address this, we introduce a novel, cascaded EIB detection
system named WitheredLeaf, which leverages smaller, code-specific language
models to filter out most negative cases and mitigate the problem, thereby
significantly enhancing the overall precision and recall. We evaluated
WitheredLeaf on 154 Python and C GitHub repositories, each with over 1,000
stars, identifying 123 new flaws, 45% of which can be exploited to disrupt the
program's normal operations. Out of 69 submitted fixes, 27 have been
successfully merged.
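
To make the notion of an EIB concrete, the following is a minimal, hypothetical illustration (not taken from the paper's dataset): a bounds check where one identifier is copy-pasted incorrectly. The code is syntactically valid and raises no warning, yet it is semantically wrong, which is precisely what makes EIBs hard to catch with syntax-oriented tooling.

```python
def in_bounds(x, y, x_min, x_max, y_min, y_max):
    # BUG (hypothetical EIB): the second comparison reuses x_min/x_max
    # instead of y_min/y_max. A copy-paste slip like this parses and
    # runs cleanly, so it survives compilation and casual review.
    return x_min <= x <= x_max and x_min <= y <= x_max

def in_bounds_fixed(x, y, x_min, x_max, y_min, y_max):
    # Corrected version: each coordinate is checked against its own range.
    return x_min <= x <= x_max and y_min <= y <= y_max

# The buggy version wrongly rejects a point whose y-coordinate is valid
# but lies outside the (mistakenly reused) x-range.
print(in_bounds(5, 50, 0, 10, 0, 100))        # buggy check: False
print(in_bounds_fixed(5, 50, 0, 10, 0, 100))  # correct check: True
```

Both functions have identical syntax trees up to identifier names, so distinguishing them requires reasoning about intent, the kind of semantic understanding the abstract attributes to LLMs.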