Abstract
Many developers rely on Large Language Models (LLMs) to facilitate software
development. Nevertheless, these models have exhibited limited capabilities in
the security domain. We introduce LLMSecGuard, a framework to offer enhanced
code security through the synergy between static code analyzers and LLMs.
LLMSecGuard is open source and aims to equip developers with code solutions
that are more secure than the code initially generated by LLMs. The framework
also provides a benchmarking feature that offers insights into the evolving
security attributes of these models.