Deep Learning for Contextualized NetFlow-Based Network Intrusion Detection: Methods, Data, Evaluation and Deployment | Authors: Abdelkader El Mahdaouy, Issam Ait Yahia, Soufiane Oualil, Ismail Berrada | Published: 2026-02-05 | Tags: Graph Neural Network, Streaming State Management, Anomaly Detection
Clouding the Mirror: Stealthy Prompt Injection Attacks Targeting LLM-based Phishing Detection | Authors: Takashi Koide, Hiroki Nakano, Daiki Chiba | Published: 2026-02-05 | Tags: Indirect Prompt Injection, Phishing Detection Methods, Prompt Injection
BadTemplate: A Training-Free Backdoor Attack via Chat Template Against Large Language Models | Authors: Zihan Wang, Hongwei Li, Rui Zhang, Wenbo Jiang, Guowen Xu | Published: 2026-02-05 | Tags: LLM Performance Evaluation, Data Toxicity, Large Language Model
Spider-Sense: Intrinsic Risk Sensing for Efficient Agent Defense with Hierarchical Adaptive Screening | Authors: Zhenxiong Yu, Zhi Yang, Zhiheng Jin, Shuhe Wang, Heng Zhang, Yanlin Fei, Lingfeng Zeng, Fangqi Lou, Shuo Zhang, Tu Hu, Jingping Liu, Rongze Chen, Xingyu Zhu, Kunyi Wang, Chaofa Yuan, Xin Guo, Zhaowei Liu, Feipeng Zhang, Jie Huang, Huacan Wang, Ronghao Chen, Liwen Zhang | Published: 2026-02-05 | Tags: Explanation of Attack Methods, Content Specialized for Toxicity Attacks
SynAT: Enhancing Security Knowledge Bases via Automatic Synthesizing Attack Tree from Crowd Discussions | Authors: Ziyou Jiang, Lin Shi, Guowei Yang, Xuyan Ma, Fenglong Li, Qing Wang | Published: 2026-02-05 | Tags: LLM Performance Evaluation, Safety of Data Generation, Attack Tree Synthesis
Hallucination-Resistant Security Planning with a Large Language Model | Authors: Kim Hammar, Tansu Alpcan, Emil Lupu | Published: 2026-02-05 | Tags: LLM Performance Evaluation, Hallucination, Detection of Hallucinations
Comparative Insights on Adversarial Machine Learning from Industry and Academia: A User-Study Approach | Authors: Vishruti Kakkad, Paul Chung, Hanan Hibshi, Maverick Woo | Published: 2026-02-04 | Tags: Poisoning, Model Extraction Attack, Educational Methods
How Few-shot Demonstrations Affect Prompt-based Defenses Against LLM Jailbreak Attacks | Authors: Yanshu Wang, Shuaishuai Yang, Jingjing He, Tong Yang | Published: 2026-02-04 | Tags: LLM Performance Evaluation, Prompt Injection, Large Language Model
Semantic Consensus Decoding: Backdoor Defense for Verilog Code Generation | Authors: Guang Yang, Xing Hu, Xiang Chen, Xin Xia | Published: 2026-02-04 | Tags: Security of Code Generation, Backdoor Detection, Model Extraction Attack
Attack-Resistant Uniform Fairness for Linear and Smooth Contextual Bandits | Authors: Qingwen Zhang, Wenjia Wang | Published: 2026-02-04 | Tags: Algorithm Design, Robust Estimation, Statistical Methods