Software developers frequently hard-code credentials such as passwords,
generic secrets, private keys, and generic tokens in software repositories,
even though this practice is strongly advised against because of the severe
threat it poses to software security. Such credentials create attack surfaces
that an adversary can exploit to mount malicious exploits such as backdoor
attacks. Recent detection efforts utilize embedding models to vectorize textual
credentials before passing them to classifiers for prediction. However, these
models struggle to discriminate between true credentials and contextually
complex sequences, resulting in high false-positive rates. Context-dependent
Pre-trained Language Models (PLMs) and Large Language Models (LLMs) such as
Generative Pre-trained Transformers (GPT) address this drawback by leveraging
the self-attention mechanism of the transformer architecture to capture
contextual dependencies between tokens in input sequences. As a result, GPT
models have achieved wide success in many natural language understanding tasks.
Hence, we assess LLMs for representing these textual credentials and feed the
extracted embedding vectors to a deep learning classifier to detect hard-coded
credentials. Our model outperforms the current state of the art by 13% in F1
score on the benchmark dataset. We have made all source code and data publicly
available to facilitate reproduction of all results presented in this paper.