Abstract
Recent advancements have led to the widespread adoption of code-oriented
large language models (Code LLMs) for programming tasks. Despite their success
in deployment, research on their security lags far behind. This paper
introduces a new attack paradigm: (automatic) external prompt injection against
Code LLMs, where attackers generate concise, non-functional induced
perturbations and inject them into a victim's code context. These induced
perturbations can be disseminated through commonly used dependencies (e.g.,
packages or a RAG knowledge base), manipulating Code LLMs to achieve malicious
objectives during the code completion process. Compared to existing attacks,
this paradigm is more realistic and threatening: unlike backdoor attacks, it
does not require control over the model's training process, and it can achieve
specific malicious objectives that are difficult for adversarial attacks.
Furthermore, we propose ShadowCode, a simple yet effective method that
automatically generates induced perturbations based on code simulation to
achieve effective and stealthy external prompt injection. ShadowCode designs
its perturbation optimization objectives by simulating realistic code contexts
and employs a greedy optimization approach with two enhancement modules:
forward reasoning enhancement and keyword-based perturbation design. We
evaluate our method across 13 distinct malicious objectives, generating 31
threat cases spanning three popular programming languages. Our results
demonstrate that ShadowCode successfully attacks three representative
open-source Code LLMs (achieving up to a 97.9% attack success rate) and two
mainstream commercial Code LLM-integrated applications (with over 90% attack
success rate) across all threat cases, using only a 12-token non-functional
induced perturbation. The code is available at
https://github.com/LianPing-cyber/ShadowCodeEPI.
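
As a rough illustration only, the sketch below shows one way the greedy, token-by-token perturbation search described in the abstract could be structured. The names `score_fn`, `candidate_tokens`, and `simulated_contexts` are hypothetical placeholders introduced here for clarity; they are not the authors' actual API, and the real ShadowCode objectives (forward reasoning enhancement, keyword-based perturbation design) are not reproduced.

```python
# Hypothetical sketch of a greedy perturbation search; not the authors' implementation.
from typing import Callable, List


def greedy_optimize_perturbation(
    simulated_contexts: List[str],               # simulated realistic code contexts
    target_completion: str,                      # malicious completion the attacker wants induced
    candidate_tokens: List[str],                 # vocabulary of non-functional candidate tokens
    score_fn: Callable[[str, str, str], float],  # (context, perturbation, target) -> attack score
    max_tokens: int = 12,                        # the abstract reports a 12-token perturbation budget
) -> str:
    """Greedily append the candidate token that most increases the average attack score."""
    perturbation = ""
    for _ in range(max_tokens):
        best_token, best_score = None, float("-inf")
        for tok in candidate_tokens:
            trial = (perturbation + " " + tok).strip()
            # Average the objective over simulated contexts so the perturbation
            # transfers across realistic code-completion scenarios.
            score = sum(
                score_fn(ctx, trial, target_completion) for ctx in simulated_contexts
            ) / len(simulated_contexts)
            if score > best_score:
                best_token, best_score = tok, score
        perturbation = (perturbation + " " + best_token).strip()
    return perturbation
```

In this reading, averaging the objective over several simulated contexts is what lets a short, non-functional perturbation remain effective when it is later encountered through a dependency in an unseen victim context.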