Prompt Injection

Information Security Based on LLM Approaches: A Review

Authors: Chang Gong, Zhongwen Li, Xiaoqi Li | Published: 2025-07-24
Network Traffic Analysis
Prompt Injection
Prompt Leaking

Tab-MIA: A Benchmark Dataset for Membership Inference Attacks on Tabular Data in LLMs

Authors: Eyal German, Sagiv Antebi, Daniel Samira, Asaf Shabtai, Yuval Elovici | Published: 2025-07-23
Relationship of AI Systems
Property Inference Attack
Prompt Injection

Depth Gives a False Sense of Privacy: LLM Internal States Inversion

Authors: Tian Dong, Yan Meng, Shaofeng Li, Guoxing Chen, Zhen Liu, Haojin Zhu | Published: 2025-07-22
Prompt Injection
Prompt Leaking
Attack Method

Attacking interpretable NLP systems

Authors: Eldor Abdukhamidov, Tamer Abuhmed, Joanna C. S. Santos, Mohammed Abuhamad | Published: 2025-07-22
Prompt Injection
Prompt validation
Adversarial Attack Methods

Multi-Stage Prompt Inference Attacks on Enterprise LLM Systems

Authors: Andrii Balashov, Olena Ponomarova, Xiaohua Zhai | Published: 2025-07-21
Indirect Prompt Injection
Prompt Injection
Attack Detection

LLAMA: Multi-Feedback Smart Contract Fuzzing Framework with LLM-Guided Seed Generation

Authors: Keke Gai, Haochen Liang, Jing Yu, Liehuang Zhu, Dusit Niyato | Published: 2025-07-16
Prompt Injection
Initial Seed Generation
Performance Evaluation Metrics

Can Large Language Models Improve Phishing Defense? A Large-Scale Controlled Experiment on Warning Dialogue Explanations

Authors: Federico Maria Cau, Giuseppe Desolda, Francesco Greco, Lucio Davide Spano, Luca Viganò | Published: 2025-07-10
Indirect Prompt Injection
Performance Evaluation
Prompt Injection

Hybrid LLM-Enhanced Intrusion Detection for Zero-Day Threats in IoT Networks

Authors: Mohammad F. Al-Hammouri, Yazan Otoum, Rasha Atwa, Amiya Nayak | Published: 2025-07-10
Hybrid Algorithm
Prompt Injection
Large Language Model

Phishing Detection in the Gen-AI Era: Quantized LLMs vs Classical Models

Authors: Jikesh Thapa, Gurrehmat Chahal, Serban Voinea Gabreanu, Yazan Otoum | Published: 2025-07-10
Performance Evaluation
Prompt Injection
Next-Generation Phishing Detection

CAVGAN: Unifying Jailbreak and Defense of LLMs via Generative Adversarial Attacks on their Internal Representations

Authors: Xiaohu Li, Yunfeng Ning, Zepeng Bao, Mayi Xu, Jianhao Chen, Tieyun Qian | Published: 2025-07-08
Prompt Injection
Adversarial Attack
Defense Effectiveness Analysis