AttestLLM: Efficient Attestation Framework for Billion-scale On-device LLMs

Authors: Ruisi Zhang, Yifei Zhao, Neusha Javidnia, Mengxin Zheng, Farinaz Koushanfar | Published: 2025-09-08
Security Strategy Generation
Efficiency Evaluation
Large Language Model

VulnRepairEval: An Exploit-Based Evaluation Framework for Assessing Large Language Model Vulnerability Repair Capabilities

Authors: Weizhe Wang, Wei Ma, Qiang Hu, Yao Zhang, Jianfei Sun, Bin Wu, Yang Liu, Guangquan Xu, Lingxiao Jiang | Published: 2025-09-03
Prompt Injection
Large Language Model
Vulnerability Analysis

Safety Alignment Should Be Made More Than Just A Few Attention Heads

Authors: Chao Huang, Zefeng Zhang, Juewei Yue, Quangang Li, Chuang Zhang, Tingwen Liu | Published: 2025-08-27
Prompt Injection
Large Language Model
Attention Mechanism

Confusion is the Final Barrier: Rethinking Jailbreak Evaluation and Investigating the Real Misuse Threat of LLMs

Authors: Yu Yan, Sheng Sun, Zhe Wang, Yijun Lin, Zenghao Duan, Zhifei Zheng, Min Liu, Zhiyi Yin, Jianping Zhang | Published: 2025-08-22 | Updated: 2025-09-15
Privacy Assessment
Ethical Standards Compliance
Large Language Model

MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols

Authors: Yixuan Yang, Daoyuan Wu, Yufan Chen | Published: 2025-08-17 | Updated: 2025-10-09
Prompt Leaking
Large Language Model
Defense Mechanism

Jailbreaking Commercial Black-Box LLMs with Explicitly Harmful Prompts

Authors: Chiyu Zhang, Lu Zhou, Xiaogang Xu, Jiafei Wu, Liming Fang, Zhe Liu | Published: 2025-08-14
Social Engineering Attack
Prompt Injection
Large Language Model

EditMF: Drawing an Invisible Fingerprint for Your Large Language Models

Authors: Jiaxuan Wu, Yinghan Zhou, Wanli Peng, Yiming Xue, Juan Wen, Ping Zhong | Published: 2025-08-12
Large Language Model
Author Attribution Method
Watermark Design

Repairing vulnerabilities without invisible hands. A differentiated replication study on LLMs

Authors: Maria Camporese, Fabio Massacci | Published: 2025-07-28
Prompt Injection
Large Language Model
Vulnerability Management

ARMOR: Aligning Secure and Safe Large Language Models via Meticulous Reasoning

Authors: Zhengyue Zhao, Yingzi Ma, Somesh Jha, Marco Pavone, Patrick McDaniel, Chaowei Xiao | Published: 2025-07-14 | Updated: 2025-10-20
Large Language Model
Safety Analysis
Evaluation Criteria

GuardVal: Dynamic Large Language Model Jailbreak Evaluation for Comprehensive Safety Testing

Authors: Peiyan Zhang, Haibo Jin, Liying Kang, Haohan Wang | Published: 2025-07-10
Prompt Validation
Large Language Model
Performance Evaluation Metrics