Prompt Injection

APFuzz: Towards Automatic Greybox Protocol Fuzzing

Authors: Yu Wang, Yang Xiang, Chandra Thapa, Hajime Suzuki | Published: 2026-02-25
Protocol Fuzzing
Prompt Injection
Research Methodology

An Explainable Memory Forensics Approach for Malware Analysis

Authors: Silvia Lucia Sanna, Davide Maiorca, Giorgio Giacinto | Published: 2026-02-23
Forensic Report
Prompt Injection
Malware Detection Method

What Breaks Embodied AI Security: LLM Vulnerabilities, CPS Flaws, or Something Else?

Authors: Boyang Ma, Hechuan Guo, Peizhuo Lv, Minghui Xu, Xuelong Dai, YeChao Zhang, Yijun Yang, Yue Zhang | Published: 2026-02-19
Indirect Prompt Injection
Security Challenges
Prompt Injection

Fail-Closed Alignment for Large Language Models

Authors: Zachary Coalson, Beth Sohler, Aiden Gabriel, Sanghyun Hong | Published: 2026-02-19
Prompt Injection
Robustness Evaluation
Defense Method

Mind the Gap: Evaluating LLMs for High-Level Malicious Package Detection vs. Fine-Grained Indicator Identification

Authors: Ahmed Ryan, Ibrahim Khalil, Abdullah Al Jahid, Md Erfan, Akond Ashfaque Ur Rahman, Md Rayhanur Rahman | Published: 2026-02-18
LLM Performance Evaluation
Indirect Prompt Injection
Prompt Injection

A Content-Based Framework for Cybersecurity Refusal Decisions in Large Language Models

Authors: Meirav Segal, Noa Linder, Omer Antverg, Gil Gekker, Tomer Fichman, Omri Bodenheimer, Edan Maor, Omer Nevo | Published: 2026-02-17
Prompt Injection
Threat Model
Defense Method

Exposing the Systematic Vulnerability of Open-Weight Models to Prefill Attacks

Authors: Lukas Struppek, Adam Gleave, Kellin Pelrine | Published: 2026-02-16
Prompt Injection
Human Rights and Technology
Attack Success Rate

DeepSight: An All-in-One LM Safety Toolkit

Authors: Bo Zhang, Jiaxuan Guo, Lijun Li, Dongrui Liu, Sujin Chen, Guanxu Chen, Zhijie Zheng, Qihao Lin, Lewen Yan, Chen Qian, Yijin Zhou, Yuyao Wu, Shaoxiong Guo, Tianyi Du, Jingyi Yang, Xuhao Hu, Ziqi Miao, Xiaoya Lu, Jing Shao, Xia Hu | Published: 2026-02-12
Prompt Injection
Large Language Model
Evaluation Method

Differentially Private and Communication Efficient Large Language Model Split Inference via Stochastic Quantization and Soft Prompt

Authors: Yujie Gu, Richeng Jin, Xiaoyu Ji, Yier Jin, Wenyuan Xu | Published: 2026-02-12
Privacy Assurance
Prompt Injection
Prompt Leaking

Jailbreaking Leaves a Trace: Understanding and Detecting Jailbreak Attacks from Internal Representations of Large Language Models

Authors: Sri Durga Sai Sowmya Kadali, Evangelos E. Papalexakis | Published: 2026-02-12
Prompt Injection
Experimental Validation
Evaluation Method