Prompt Injection

Automating the Detection of Code Vulnerabilities by Analyzing GitHub Issues

Authors: Daniele Cipollone, Changjie Wang, Mariano Scazzariello, Simone Ferlin, Maliheh Izadi, Dejan Kostic, Marco Chiesa | Published: 2025-01-09
Topics: LLM Performance Evaluation, Prompt Injection, Vulnerability Management

SpaLLM-Guard: Pairing SMS Spam Detection Using Open-source and Commercial LLMs

Authors: Muhammad Salman, Muhammad Ikram, Nardine Basta, Mohamed Ali Kaafar | Published: 2025-01-09
Topics: LLM Performance Evaluation, Prompt Injection, Improvement of Learning

Jailbreaking Multimodal Large Language Models via Shuffle Inconsistency

Authors: Shiji Zhao, Ranjie Duan, Fengxiang Wang, Chi Chen, Caixin Kang, Jialing Tao, YueFeng Chen, Hui Xue, Xingxing Wei | Published: 2025-01-09
Topics: Text Shuffle Inconsistency, Prompt Injection, Attack Method

Exploring Large Language Models for Semantic Analysis and Categorization of Android Malware

Authors: Brandon J Walton, Mst Eshita Khatun, James M Ghawaly, Aisha Ali-Gombe | Published: 2025-01-08
Topics: Prompt Injection, Prompt Engineering, Malware Classification

PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for Text-to-Image Models

Authors: Lingzhi Yuan, Xinfeng Li, Chejian Xu, Guanhong Tao, Xiaojun Jia, Yihao Huang, Wei Dong, Yang Liu, XiaoFeng Wang, Bo Li | Published: 2025-01-07
Topics: Content Moderation, Soft Prompt Optimization, Prompt Injection

RTLMarker: Protecting LLM-Generated RTL Copyright via a Hardware Watermarking Framework

Authors: Kun Wang, Kaiyan Chang, Mengdi Wang, Xinqi Zou, Haobo Xu, Yinhe Han, Ying Wang | Published: 2025-01-05
Topics: Prompt Injection, Watermark Robustness, Watermark Evaluation

GNSS/GPS Spoofing and Jamming Identification Using Machine Learning and Deep Learning

Authors: Ali Ghanbarzade, Hossein Soleimani | Published: 2025-01-04
Topics: GNSS Security, Prompt Injection, Label

Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models

Authors: Yanjiang Liu, Shuhen Zhou, Yaojie Lu, Huijia Zhu, Weiqiang Wang, Hongyu Lin, Ben He, Xianpei Han, Le Sun | Published: 2025-01-03
Topics: Framework, Prompt Injection, Attack Method

CySecBench: Generative AI-based CyberSecurity-focused Prompt Dataset for Benchmarking Large Language Models

Authors: Johan Wahréus, Ahmed Mohamed Hussain, Panos Papadimitratos | Published: 2025-01-02
Topics: LLM Performance Evaluation, Cybersecurity, Prompt Injection

Safeguarding Large Language Models in Real-time with Tunable Safety-Performance Trade-offs

Authors: Joao Fonseca, Andrew Bell, Julia Stoyanovich | Published: 2025-01-02
Topics: Framework, Prompt Injection, Safety Alignment