Large language models (LLMs) have brought significant advances to code
generation, benefiting both novice and experienced developers. However,
because they are trained on unsanitized data from open-source repositories
such as GitHub, they risk inadvertently propagating security vulnerabilities.
To address this concern, this paper presents a comprehensive study
focused on evaluating and enhancing code LLMs from a software security
perspective. We introduce SecuCoGen\footnote{SecuCoGen has been uploaded as
supplemental material and will be made publicly available after publication.},
a meticulously curated dataset targeting 21 critical vulnerability types.
SecuCoGen comprises 180 samples and serves as the foundation for experiments
on three crucial code-related tasks, each with a strong emphasis on security:
code generation, code repair, and vulnerability classification. Our
experimental results reveal that existing models often overlook security
concerns during code generation and consequently produce vulnerable code.
To address this, we propose effective approaches that mitigate these security
vulnerabilities and enhance the overall robustness of LLM-generated code.
Moreover, our study identifies weaknesses in existing models' ability to repair
vulnerable code, even when provided with vulnerability information.
Additionally, certain vulnerability types remain particularly challenging for
the models, hindering their performance in vulnerability classification. Based
on these findings, we believe our study will benefit the software engineering
community by inspiring improved methods for training and utilizing code LLMs,
ultimately leading to safer and more trustworthy model deployment.