Exploring LLMs for Malware Detection: Review, Framework Design, and Countermeasure Approaches | Authors: Jamal Al-Karaki, Muhammad Al-Zafar Khan, Marwan Omar | Published: 2024-09-11 | Tags: LLM Security, Prompt Injection, Malware Classification
CLNX: Bridging Code and Natural Language for C/C++ Vulnerability-Contributing Commits Identification | Authors: Zeqing Qin, Yiwei Wu, Lansheng Han | Published: 2024-09-11 | Tags: LLM Performance Evaluation, Program Analysis, Prompt Injection
DrLLM: Prompt-Enhanced Distributed Denial-of-Service Resistance Method with Large Language Models | Authors: Zhenyu Yin, Shang Liu, Guangyuan Xu | Published: 2024-09-11 | Updated: 2025-01-13 | Tags: DDoS Attack Detection, LLM Performance Evaluation, Prompt Injection
AdaPPA: Adaptive Position Pre-Fill Jailbreak Attack Approach Targeting LLMs | Authors: Lijia Lv, Weigang Zhang, Xuehai Tang, Jie Wen, Feng Liu, Jizhong Han, Songlin Hu | Published: 2024-09-11 | Tags: LLM Security, Prompt Injection, Attack Method
Exploring User Privacy Awareness on GitHub: An Empirical Study | Authors: Costanza Alfieri, Juri Di Rocco, Paola Inverardi, Phuong T. Nguyen | Published: 2024-09-06 | Updated: 2024-09-10 | Tags: Privacy Protection, Prompt Injection, User Activity Analysis
RACONTEUR: A Knowledgeable, Insightful, and Portable LLM-Powered Shell Command Explainer | Authors: Jiangyi Deng, Xinfeng Li, Yanjiao Chen, Yijie Bai, Haiqin Weng, Yan Liu, Tao Wei, Wenyuan Xu | Published: 2024-09-03 | Tags: LLM Performance Evaluation, Cybersecurity, Prompt Injection
Membership Inference Attacks Against In-Context Learning | Authors: Rui Wen, Zheng Li, Michael Backes, Yang Zhang | Published: 2024-09-02 | Tags: Prompt Injection, Membership Inference, Attack Method
Unveiling the Vulnerability of Private Fine-Tuning in Split-Based Frameworks for Large Language Models: A Bidirectionally Enhanced Attack | Authors: Guanzhong Chen, Zhenghan Qin, Mingxin Yang, Yajie Zhou, Tao Fan, Tianyu Du, Zenglin Xu | Published: 2024-09-02 | Updated: 2024-09-04 | Tags: LLM Security, Prompt Injection, Attack Method
ProphetFuzz: Fully Automated Prediction and Fuzzing of High-Risk Option Combinations with Only Documentation via Large Language Model | Authors: Dawei Wang, Geng Zhou, Li Chen, Dan Li, Yukai Miao | Published: 2024-09-02 | Tags: Option-Based Fuzzing, Prompt Injection, Vulnerability Management
The Dark Side of Human Feedback: Poisoning Large Language Models via User Inputs | Authors: Bocheng Chen, Hanqing Guo, Guangjing Wang, Yuanda Wang, Qiben Yan | Published: 2024-09-01 | Tags: LLM Performance Evaluation, Prompt Injection, Poisoning