The Hidden Risks of LLM-Generated Web Application Code: A Security-Centric Evaluation of Code Generation Capabilities in Large Language Models
Abstract
The rapid advancement of Large Language Models (LLMs) has enhanced software
development processes, minimizing the time and effort required for coding and
improving developer productivity. However, despite these potential benefits,
LLMs have been shown to generate insecure code in controlled environments,
raising critical concerns about their reliability and security in
real-world applications. This paper uses predefined security parameters to
evaluate the security compliance of LLM-generated code across multiple models,
such as ChatGPT, DeepSeek, Claude, Gemini and Grok. The analysis reveals
critical vulnerabilities in authentication mechanisms, session management,
input validation and HTTP security headers. Although some models implement
security measures to a limited extent, none fully align with industry best
practices, highlighting the associated risks in automated software development.
Our findings underscore that human expertise remains crucial for reviewing
LLM-generated code and ensuring its secure deployment. They also highlight the
need for robust security assessment frameworks to improve the reliability of
LLM-generated code in real-world applications.
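To make the evaluation categories named in the abstract concrete, the following is a minimal illustrative sketch, not drawn from the paper, of the kinds of controls it refers to: hardened session management, whitelist input validation, and common HTTP security headers. The Flask framework, the /login endpoint, and the specific header values are assumptions chosen for illustration; the paper's actual security parameters are not reproduced here.

# Illustrative sketch only (assumed Flask app); shows the control categories
# named in the abstract, not the paper's exact checklist.
from flask import Flask, abort, request, session
import re
import secrets

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)  # random key for signing session cookies

# Session management: harden the session cookie attributes.
app.config.update(
    SESSION_COOKIE_SECURE=True,     # send the cookie over HTTPS only
    SESSION_COOKIE_HTTPONLY=True,   # hide the cookie from JavaScript
    SESSION_COOKIE_SAMESITE="Lax",  # limit cross-site request reuse
)

# Input validation: accept only an explicit whitelist pattern.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,32}")

@app.post("/login")
def login():
    username = request.form.get("username", "")
    if not USERNAME_RE.fullmatch(username):
        abort(400)  # reject unexpected input instead of passing it downstream
    session["user"] = username  # the authentication check itself is omitted here
    return {"status": "ok"}

@app.after_request
def set_security_headers(response):
    # HTTP security headers: typical defaults, not the paper's parameters.
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["X-Frame-Options"] = "DENY"
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response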