AI Security Portal K Program
When Prompts Become Payloads: A Framework for Mitigating SQL Injection Attacks in Large Language Model-Driven Applications
Abstract
Natural language interfaces to structured databases are becoming increasingly common, largely due to advances in large language models (LLMs) that enable users to query data through conversational input rather than formal query languages such as SQL. While this paradigm significantly improves usability and accessibility, it introduces new security risks, particularly the amplification of SQL injection vulnerabilities through the prompt-to-SQL translation process. Malicious users can exploit this mechanism by crafting adversarial prompts that manipulate model behavior and produce unsafe queries. In this work, we propose a multi-layered security framework designed to detect and mitigate LLM-mediated SQL injection attacks. The framework integrates a front-end security shield for prompt sanitization, an advanced threat detection model for behavioral and semantic anomaly identification, and a signature-based control layer for known attack patterns. We evaluate the framework under diverse and realistic attack scenarios, including prompt injection, obfuscated SQL payloads, and context-manipulation attacks. To ensure robustness, we generate and curate a comprehensive benchmark dataset of adversarial prompts and assess performance under a fine-tuned LLM configuration. Experimental results show that the proposed approach achieves high detection accuracy while maintaining low false-positive rates, supporting the secure deployment of LLM-powered database applications.
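To illustrate the layered screening idea described in the abstract, the sketch below chains a sanitization step, a crude lexical anomaly heuristic, and a signature check over known SQL-injection patterns. This is a minimal, hypothetical illustration only: the function names, signature list, and keyword-density threshold are assumptions for the example, not the paper's actual implementation (which uses an LLM-based threat detection model rather than a lexical score).

```python
import re

# Hypothetical signatures for a few well-known SQL-injection patterns (layer 3).
SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),      # UNION-based extraction
    re.compile(r"(?i)\bor\b\s+'?\d+'?\s*=\s*'?\d+"),  # tautologies like OR 1=1
    re.compile(r"(?i);\s*drop\s+table\b"),          # stacked destructive query
    re.compile(r"--|/\*"),                          # SQL comment markers
]

def sanitize_prompt(prompt: str) -> str:
    """Layer 1: drop non-printable characters and normalize whitespace."""
    cleaned = "".join(ch for ch in prompt if ch.isprintable())
    return re.sub(r"\s+", " ", cleaned).strip()

def anomaly_score(prompt: str) -> float:
    """Layer 2 (stand-in for the paper's detection model): density of SQL
    keywords in what should be a natural-language question."""
    keywords = {"select", "union", "drop", "insert", "delete", "update", "exec"}
    tokens = re.findall(r"[a-zA-Z]+", prompt.lower())
    if not tokens:
        return 0.0
    return sum(t in keywords for t in tokens) / len(tokens)

def screen_prompt(prompt: str, threshold: float = 0.15) -> tuple[bool, str]:
    """Run the three layers in order; return (allowed, reason)."""
    cleaned = sanitize_prompt(prompt)
    for sig in SIGNATURES:
        if sig.search(cleaned):
            return False, "signature match"
    if anomaly_score(cleaned) > threshold:
        return False, "anomalous SQL-keyword density"
    return True, "ok"
```

In this sketch a benign question such as `screen_prompt("Which customers are in Berlin?")` passes, while a prompt smuggling `' OR 1=1 --` is rejected by the signature layer before the LLM ever generates SQL. A production defense would combine such cheap front-end checks with the semantic detection model the paper evaluates.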