Large Language Model

SpatialJB: How Text Distribution Art Becomes the “Jailbreak Key” for LLM Guardrails

Authors: Zhiyi Mou, Jingyuan Yang, Zeheng Qian, Wangze Ni, Tianfang Xiao, Ning Liu, Chen Zhang, Zhan Qin, Kui Ren | Published: 2026-01-14
LLM Utilization
Prompt Injection
Large Language Model

HoneyTrap: Deceiving Large Language Model Attackers to Honeypot Traps with Resilient Multi-Agent Defense

Authors: Siyuan Li, Xi Lin, Jun Wu, Zehao Liu, Haoyu Li, Tianjie Ju, Xiang Chen, Jianhua Li | Published: 2026-01-07
Prompt Injection
Large Language Model
Adversarial Attack Detection

Jailbreaking LLMs & VLMs: Mechanisms, Evaluation, and Unified Defense

Authors: Zejian Chen, Chaozhuo Li, Chao Li, Xi Zhang, Litian Zhang, Yiming He | Published: 2026-01-07
Prompt Injection
Large Language Model
Adversarial Attack Detection

On the Effectiveness of Instruction-Tuning Local LLMs for Identifying Software Vulnerabilities

Authors: Sangryu Park, Gihyuk Ko, Homook Cho | Published: 2025-12-23
Prompt Injection
Large Language Model
Vulnerability Analysis

Large Language Models as a (Bad) Security Norm in the Context of Regulation and Compliance

Authors: Kaspar Rosager Ludvigsen | Published: 2025-12-18
LLM Utilization
Indirect Prompt Injection
Large Language Model

FlipLLM: Efficient Bit-Flip Attacks on Multimodal LLMs using Reinforcement Learning

Authors: Khurram Khalil, Khaza Anuarul Hoque | Published: 2025-12-10
Prompt Injection
Large Language Model
Vulnerability Assessment Method

Attention is All You Need to Defend Against Indirect Prompt Injection Attacks in LLMs

Authors: Yinan Zhong, Qianhao Miao, Yanjiao Chen, Jiangyi Deng, Yushi Cheng, Wenyuan Xu | Published: 2025-12-09
Indirect Prompt Injection
Prompt Validation
Large Language Model

SoK: a Comprehensive Causality Analysis Framework for Large Language Model Security

Authors: Wei Zhao, Zhe Li, Jun Sun | Published: 2025-12-04
Prompt Injection
Causal Inference
Large Language Model

Benchmarking and Understanding Safety Risks in AI Character Platforms

Authors: Yiluo Wei, Peixian Zhang, Gareth Tyson | Published: 2025-12-01
Character Metadata Collection
Risk Assessment
Large Language Model

Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization

Authors: Xurui Li, Kaisong Song, Rui Zhu, Pin-Yu Chen, Haixu Tang | Published: 2025-11-24
Prompt Injection
Large Language Model
Malicious Prompt