COGNITION: From Evaluation to Defense against Multimodal LLM CAPTCHA Solvers
Authors: Junyu Wang, Changjia Zhu, Yuanbo Zhou, Lingyao Li, Xu He, Junjie Xiong | Published: 2025-12-02 | Updated: 2025-12-04
Tags: Prompt Leaking, Model Performance Evaluation, Model Extraction Attack
Literature Database
Explainable and Resilient ML-Based Physical-Layer Attack Detectors
Authors: Aleksandra Knapińska, Marija Furdek | Published: 2025-09-30 | Updated: 2025-10-02
Tags: Model Inversion, Model Performance Evaluation, Physical-Layer Attack Detection
SeedPrints: Fingerprints Can Even Tell Which Seed Your Large Language Model Was Trained From
Authors: Yao Tong, Haonan Wang, Siquan Li, Kenji Kawaguchi, Tianyang Hu | Published: 2025-09-30 | Updated: 2025-10-02
Tags: Token Distribution Analysis, Hallucination, Model Performance Evaluation
Respond to Change with Constancy: Instruction-tuning with LLM for Non-I.I.D. Network Traffic Classification
Authors: Xinjie Lin, Gang Xiong, Gaopeng Gou, Wenqi Dong, Jing Yu, Zhen Li, Wei Xia | Published: 2025-05-27 | Updated: 2025-05-29
Tags: Traffic Classification, Model Performance Evaluation, Structural Learning
DFIR-Metric: A Benchmark Dataset for Evaluating Large Language Models in Digital Forensics and Incident Response
Authors: Bilel Cherif, Tamas Bisztray, Richard A. Dubniczky, Aaesha Aldahmani, Saeed Alshehhi, Norbert Tihanyi | Published: 2025-05-26 | Updated: 2025-05-28
Tags: Hallucination, Model Performance Evaluation, Evaluation Method
What Really Matters in Many-Shot Attacks? An Empirical Study of Long-Context Vulnerabilities in LLMs
Authors: Sangyeop Kim, Yohan Lee, Yongwoo Song, Kimin Lee | Published: 2025-05-26 | Updated: 2025-05-28
Tags: Prompt Injection, Model Performance Evaluation, Large Language Model
CTI-HAL: A Human-Annotated Dataset for Cyber Threat Intelligence Analysis
Authors: Sofia Della Penna, Roberto Natella, Vittorio Orbinato, Lorenzo Parracino, Luciano Pianese | Published: 2025-04-08 | Updated: 2025-05-27
Tags: LLM Application, Model Performance Evaluation, Large Language Model
Separator Injection Attack: Uncovering Dialogue Biases in Large Language Models Caused by Role Separators
Authors: Xitao Li, Haijun Wang, Jiang Wu, Ting Liu | Published: 2025-04-08 | Updated: 2025-05-27
Tags: Indirect Prompt Injection, Prompting Strategy, Model Performance Evaluation
Enhancing Smart Contract Vulnerability Detection in DApps Leveraging Fine-Tuned LLM
Authors: Jiuyang Bu, Wenkai Li, Zongwei Li, Zeng Zhang, Xiaoqi Li | Published: 2025-04-07 | Updated: 2025-05-27
Tags: Smart Contract, Model Performance Evaluation, Vulnerability Analysis
Are You Getting What You Pay For? Auditing Model Substitution in LLM APIs
Authors: Will Cai, Tianneng Shi, Xuandong Zhao, Dawn Song | Published: 2025-04-07 | Updated: 2025-05-27
Tags: Identification of AI Output, API Security, Model Performance Evaluation