Large Language Model

Yet Another Watermark for Large Language Models

Authors: Siyuan Bao, Ying Shi, Zhiguang Yang, Hanzhou Wu, Xinpeng Zhang | Published: 2025-09-16
Prompt Leaking
Large Language Model
Watermarking Technology

NeuroStrike: Neuron-Level Attacks on Aligned LLMs

Authors: Lichao Wu, Sasha Behrouzi, Mohamadreza Rostami, Maximilian Thang, Stjepan Picek, Ahmad-Reza Sadeghi | Published: 2025-09-15
Prompt Injection
Large Language Model
Safety Mechanism Analysis

AttestLLM: Efficient Attestation Framework for Billion-scale On-device LLMs

Authors: Ruisi Zhang, Yifei Zhao, Neusha Javidnia, Mengxin Zheng, Farinaz Koushanfar | Published: 2025-09-08
Security Strategy Generation
Efficiency Evaluation
Large Language Model

VulnRepairEval: An Exploit-Based Evaluation Framework for Assessing Large Language Model Vulnerability Repair Capabilities

Authors: Weizhe Wang, Wei Ma, Qiang Hu, Yao Zhang, Jianfei Sun, Bin Wu, Yang Liu, Guangquan Xu, Lingxiao Jiang | Published: 2025-09-03
Prompt Injection
Large Language Model
Vulnerability Analysis

Safety Alignment Should Be Made More Than Just A Few Attention Heads

Authors: Chao Huang, Zefeng Zhang, Juewei Yue, Quangang Li, Chuang Zhang, Tingwen Liu | Published: 2025-08-27
Prompt Injection
Large Language Model
Attention Mechanism

Confusion is the Final Barrier: Rethinking Jailbreak Evaluation and Investigating the Real Misuse Threat of LLMs

Authors: Yu Yan, Sheng Sun, Zhe Wang, Yijun Lin, Zenghao Duan, Zhifei Zheng, Min Liu, Zhiyi Yin, Jianping Zhang | Published: 2025-08-22 | Updated: 2025-09-15
Privacy Assessment
Ethical Standards Compliance
Large Language Model

MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols

Authors: Yixuan Yang, Daoyuan Wu, Yufan Chen | Published: 2025-08-17 | Updated: 2025-10-09
Prompt Leaking
Large Language Model
Defense Mechanism

Jailbreaking Commercial Black-Box LLMs with Explicitly Harmful Prompts

Authors: Chiyu Zhang, Lu Zhou, Xiaogang Xu, Jiafei Wu, Liming Fang, Zhe Liu | Published: 2025-08-14
Social Engineering Attack
Prompt Injection
Large Language Model

EditMF: Drawing an Invisible Fingerprint for Your Large Language Models

Authors: Jiaxuan Wu, Yinghan Zhou, Wanli Peng, Yiming Xue, Juan Wen, Ping Zhong | Published: 2025-08-12
Large Language Model
Author Attribution Method
Watermark Design

Repairing vulnerabilities without invisible hands. A differentiated replication study on LLMs

Authors: Maria Camporese, Fabio Massacci | Published: 2025-07-28
Prompt Injection
Large Language Model
Vulnerability Management