Focus Session: LLM4PQC — An Agentic Framework for Accurate and Efficient Synthesis of PQC Cores
Authors: Buddhi Perera, Zeng Wang, Weihua Xiao, Mohammed Nabeel, Ozgur Sinanoglu, Johann Knechtel, Ramesh Karri | Published: 2026-02-10 | Tags: LLM Performance Evaluation, Hardware Accelerator, Prompt Leaking | Source: Literature Database
A Behavioral Fingerprint for Large Language Models: Provenance Tracking via Refusal Vectors
Authors: Zhenyu Xu, Victor S. Sheng | Published: 2026-02-10 | Tags: Disabling Safety Mechanisms of LLM, LLM Performance Evaluation, Evaluation Metrics
LLMAC: A Global and Explainable Access Control Framework with Large Language Model
Authors: Sharif Noor Zisad, Ragib Hasan | Published: 2026-02-10 | Tags: LLM Performance Evaluation, Poisoning Attack on RAG, Access Control Model
Towards Real-World Industrial-Scale Verification: LLM-Driven Theorem Proving on seL4
Authors: Jianyu Zhang, Fuyuan Zhang, Jiayi Lu, Jilin Hu, Xiaoyi Yin, Long Zhang, Feng Yang, Yongwang Zhao | Published: 2026-02-09 | Tags: LLM Performance Evaluation, Program Understanding, Transparency and Verification
InfiCoEvalChain: A Blockchain-Based Decentralized Framework for Collaborative LLM Evaluation
Authors: Yifan Yang, Jinjia Li, Kunxi Li, Puhao Zheng, Yuanyi Wang, Zheyan Qu, Yang Yu, Jianmin Wu, Ming Li, Hongxia Yang | Published: 2026-02-09 | Tags: LLM Performance Evaluation, Incentive Mechanism, Model Evaluation Methods
BadTemplate: A Training-Free Backdoor Attack via Chat Template Against Large Language Models
Authors: Zihan Wang, Hongwei Li, Rui Zhang, Wenbo Jiang, Guowen Xu | Published: 2026-02-05 | Tags: LLM Performance Evaluation, Data Poisoning, Large Language Model
SynAT: Enhancing Security Knowledge Bases via Automatic Synthesizing Attack Tree from Crowd Discussions
Authors: Ziyou Jiang, Lin Shi, Guowei Yang, Xuyan Ma, Fenglong Li, Qing Wang | Published: 2026-02-05 | Tags: LLM Performance Evaluation, Safety of Data Generation, Attack Tree Synthesis
Hallucination-Resistant Security Planning with a Large Language Model
Authors: Kim Hammar, Tansu Alpcan, Emil Lupu | Published: 2026-02-05 | Tags: LLM Performance Evaluation, Hallucination, Detection of Hallucinations
How Few-shot Demonstrations Affect Prompt-based Defenses Against LLM Jailbreak Attacks
Authors: Yanshu Wang, Shuaishuai Yang, Jingjing He, Tong Yang | Published: 2026-02-04 | Tags: LLM Performance Evaluation, Prompt Injection, Large Language Model
LogicScan: An LLM-driven Framework for Detecting Business Logic Vulnerabilities in Smart Contracts
Authors: Jiaqi Gao, Zijian Zhang, Yuqiang Sun, Ye Liu, Chengwei Liu, Han Liu, Yi Li, Yang Liu | Published: 2026-02-03 | Tags: LLM Performance Evaluation, Smart Contract Attack, Prompt Leaking