Abstract
While prior studies have explored the security of code generated by ChatGPT and
other Large Language Models, they were conducted in controlled experimental
settings and did not use code generated in, or provided through, actual
developer interactions. This paper not only examines the security of code generated by
ChatGPT based on real developer interactions, curated in the DevGPT dataset,
but also assesses ChatGPT's capability to find and fix these vulnerabilities.
We analysed 1,586 C, C++, and C# code snippets using static scanners, which
detected potential issues in 124 files. After manual analysis, we selected 26
files with 32 confirmed vulnerabilities for further investigation.
We submitted these files to ChatGPT via the OpenAI API, asking it to detect
security issues, identify the corresponding Common Weakness Enumeration
numbers, and propose fixes. The responses and modified code were manually
reviewed and re-scanned for vulnerabilities. ChatGPT successfully detected 18
out of 32 security issues and resolved 17 issues but failed to recognize or fix
the remainder. Interestingly, only 10 of the vulnerabilities resulted from the
user prompts, while 22 were introduced by ChatGPT itself.
We caution developers that code generated by ChatGPT is more likely to
contain vulnerabilities than their own code. Furthermore, at times
ChatGPT reports incorrect information with apparent confidence, which may
mislead less experienced developers. Our findings confirm previous studies in
demonstrating that ChatGPT is not sufficiently reliable for generating secure
code or for identifying all vulnerabilities, underscoring the continuing
importance of static scanners and manual review.