Large Language Model

EvoMail: Self-Evolving Cognitive Agents for Adaptive Spam and Phishing Email Defense

Authors: Wei Huang, De-Tian Chu, Lin-Yuan Bai, Wei Kang, Hai-Tao Zhang, Bo Li, Zhi-Mo Han, Jing Ge, Hai-Feng Lin | Published: 2025-09-25
Phishing Attack
Large Language Model
Self-Evolving Framework

LLM-based Vulnerability Discovery through the Lens of Code Metrics

Authors: Felix Weissberg, Lukas Pirch, Erik Imgrund, Jonas Möller, Thorsten Eisenhofer, Konrad Rieck | Published: 2025-09-23
Code Metrics Evaluation
Prompt Injection
Large Language Model

LLM Jailbreak Detection for (Almost) Free!

Authors: Guorui Chen, Yifan Xia, Xiaojun Jia, Zhijiang Li, Philip Torr, Jindong Gu | Published: 2025-09-18
Large Language Model
Evaluation Method
Watermarking Technology

Yet Another Watermark for Large Language Models

Authors: Siyuan Bao, Ying Shi, Zhiguang Yang, Hanzhou Wu, Xinpeng Zhang | Published: 2025-09-16
Prompt Leaking
Large Language Model
Watermarking Technology

NeuroStrike: Neuron-Level Attacks on Aligned LLMs

Authors: Lichao Wu, Sasha Behrouzi, Mohamadreza Rostami, Maximilian Thang, Stjepan Picek, Ahmad-Reza Sadeghi | Published: 2025-09-15
Prompt Injection
Large Language Model
Safety Mechanism Analysis

AttestLLM: Efficient Attestation Framework for Billion-scale On-device LLMs

Authors: Ruisi Zhang, Yifei Zhao, Neusha Javidnia, Mengxin Zheng, Farinaz Koushanfar | Published: 2025-09-08
Security Strategy Generation
Efficiency Evaluation
Large Language Model

VulnRepairEval: An Exploit-Based Evaluation Framework for Assessing Large Language Model Vulnerability Repair Capabilities

Authors: Weizhe Wang, Wei Ma, Qiang Hu, Yao Zhang, Jianfei Sun, Bin Wu, Yang Liu, Guangquan Xu, Lingxiao Jiang | Published: 2025-09-03
Prompt Injection
Large Language Model
Vulnerability Analysis

Safety Alignment Should Be Made More Than Just A Few Attention Heads

Authors: Chao Huang, Zefeng Zhang, Juewei Yue, Quangang Li, Chuang Zhang, Tingwen Liu | Published: 2025-08-27
Prompt Injection
Large Language Model
Attention Mechanism

Confusion is the Final Barrier: Rethinking Jailbreak Evaluation and Investigating the Real Misuse Threat of LLMs

Authors: Yu Yan, Sheng Sun, Zhe Wang, Yijun Lin, Zenghao Duan, Zhifei Zheng, Min Liu, Zhiyi Yin, Jianping Zhang | Published: 2025-08-22 | Updated: 2025-09-15
Privacy Assessment
Ethical Standards Compliance
Large Language Model

Jailbreaking Commercial Black-Box LLMs with Explicitly Harmful Prompts

Authors: Chiyu Zhang, Lu Zhou, Xiaogang Xu, Jiafei Wu, Liming Fang, Zhe Liu | Published: 2025-08-14
Social Engineering Attack
Prompt Injection
Large Language Model