Abstract
With the advent of large language models (LLMs), numerous software service
providers (SSPs) are dedicated to developing LLMs customized for code
generation tasks, such as CodeLlama and Copilot. However, attackers can
leverage these LLMs to create malicious software that threatens the software
ecosystem; for example, they can automate the creation of advanced phishing
malware. To address this issue, we first conduct an
empirical study and construct a prompt dataset, MCGTest, built with
approximately 400 person-hours of work and comprising 406 malicious code
generation tasks. Utilizing this dataset, we propose MCGMark, the first robust,
code structure-aware, and encodable watermarking approach to trace
LLM-generated code. MCGMark embeds encodable information by controlling token
selection, and it preserves output quality by accounting for probabilistic outliers.
Additionally, we enhance the robustness of the watermark by considering the
structural features of malicious code, preventing the embedding of the
watermark in easily modified positions, such as comments. We validate the
effectiveness and robustness of MCGMark on DeepSeek-Coder. MCGMark achieves
an embedding success rate of 88.9% within a maximum output limit of 400 tokens.
Furthermore, it demonstrates strong robustness and has minimal impact on
the quality of the output code. Our approach assists SSPs in tracing and
holding responsible parties accountable for malicious code generated by LLMs.
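The token-selection idea behind such watermarks can be illustrated with a toy sketch. All names below are hypothetical and this is not MCGMark's actual scheme, which also weighs token probabilities and avoids easily modified positions such as comments; the sketch only shows the core mechanism of splitting the candidate vocabulary deterministically on the generation context so that the half the chosen token falls in encodes one payload bit:

```python
import hashlib

def split_vocab(context: str, vocab: list[str]) -> tuple[list[str], list[str]]:
    """Deterministically split the candidate vocabulary into two halves,
    keyed on the generation context (a toy stand-in for controlled token
    selection; a real scheme would use a secret key as well)."""
    ranked = sorted(
        vocab,
        key=lambda t: hashlib.sha256((context + "|" + t).encode()).hexdigest(),
    )
    mid = len(ranked) // 2
    return ranked[:mid], ranked[mid:]

def embed_bit(context: str, candidates: list[str], bit: int) -> str:
    """Pick the first candidate from the half that encodes `bit`.
    Here `candidates` is assumed to be pre-sorted by model probability,
    so this picks the most probable allowed token."""
    zero_half, one_half = split_vocab(context, candidates)
    allowed = one_half if bit else zero_half
    for tok in candidates:
        if tok in allowed:
            return tok
    raise ValueError("no candidate available for this bit")

def extract_bit(context: str, candidates: list[str], chosen: str) -> int:
    """Recover the embedded bit by recomputing the split and checking
    which half the emitted token belongs to."""
    zero_half, _ = split_vocab(context, candidates)
    return 0 if chosen in zero_half else 1
```

Because the split is a deterministic function of the context, a verifier who knows the scheme can recompute the halves at each position and read the payload bits back out of the generated code without access to the model.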