LLM Performance Evaluation

IMMACULATE: A Practical LLM Auditing Framework via Verifiable Computation

Authors: Yanpei Guo, Wenjie Qu, Linyu Wu, Shengfang Zhai, Lionel Z. Wang, Ming Xu, Yue Liu, Binhang Yuan, Dawn Song, Jiaheng Zhang | Published: 2026-02-26
LLM Performance Evaluation
Model evaluation methods
Auditing Methods

Red-Teaming Claude Opus and ChatGPT-based Security Advisors for Trusted Execution Environments

Authors: Kunal Mukherjee | Published: 2026-02-23
LLM Performance Evaluation
Prompt leaking
Vulnerability Analysis

Mind the Gap: Evaluating LLMs for High-Level Malicious Package Detection vs. Fine-Grained Indicator Identification

Authors: Ahmed Ryan, Ibrahim Khalil, Abdullah Al Jahid, Md Erfan, Akond Ashfaque Ur Rahman, Md Rayhanur Rahman | Published: 2026-02-18
LLM Performance Evaluation
Indirect Prompt Injection
Prompt Injection

Focus Session: LLM4PQC — An Agentic Framework for Accurate and Efficient Synthesis of PQC Cores

Authors: Buddhi Perera, Zeng Wang, Weihua Xiao, Mohammed Nabeel, Ozgur Sinanoglu, Johann Knechtel, Ramesh Karri | Published: 2026-02-10
LLM Performance Evaluation
Hardware Accelerator
Prompt leaking

A Behavioral Fingerprint for Large Language Models: Provenance Tracking via Refusal Vectors

Authors: Zhenyu Xu, Victor S. Sheng | Published: 2026-02-10
Disabling Safety Mechanisms of LLM
LLM Performance Evaluation
evaluation metrics

LLMAC: A Global and Explainable Access Control Framework with Large Language Model

Authors: Sharif Noor Zisad, Ragib Hasan | Published: 2026-02-10
LLM Performance Evaluation
Poisoning attack on RAG
Access Control Model

Towards Real-World Industrial-Scale Verification: LLM-Driven Theorem Proving on seL4

Authors: Jianyu Zhang, Fuyuan Zhang, Jiayi Lu, Jilin Hu, Xiaoyi Yin, Long Zhang, Feng Yang, Yongwang Zhao | Published: 2026-02-09
LLM Performance Evaluation
Program Understanding
Transparency and Verification

InfiCoEvalChain: A Blockchain-Based Decentralized Framework for Collaborative LLM Evaluation

Authors: Yifan Yang, Jinjia Li, Kunxi Li, Puhao Zheng, Yuanyi Wang, Zheyan Qu, Yang Yu, Jianmin Wu, Ming Li, Hongxia Yang | Published: 2026-02-09
LLM Performance Evaluation
Incentive Mechanism
Model evaluation methods

BadTemplate: A Training-Free Backdoor Attack via Chat Template Against Large Language Models

Authors: Zihan Wang, Hongwei Li, Rui Zhang, Wenbo Jiang, Guowen Xu | Published: 2026-02-05
LLM Performance Evaluation
Data Poisoning
Large Language Model

SynAT: Enhancing Security Knowledge Bases via Automatic Synthesizing Attack Tree from Crowd Discussions

Authors: Ziyou Jiang, Lin Shi, Guowei Yang, Xuyan Ma, Fenglong Li, Qing Wang | Published: 2026-02-05
LLM Performance Evaluation
Safety of Data Generation
Attack Tree Synthesis