Literature Database

Structure-Preference Enabled Graph Embedding Generation under Differential Privacy

Authors: Sen Zhang, Qingqing Ye, Haibo Hu | Published: 2025-01-07
Topics: Privacy Protection, Equivalence Evaluation

LLM4CVE: Enabling Iterative Automated Vulnerability Repair with Large Language Models

Authors: Mohamad Fakih, Rahul Dharmaji, Halima Bouzidi, Gustavo Quiros Araya, Oluwatosin Ogundare, Mohammad Abdullah Al Faruque | Published: 2025-01-07
Topics: LLM Performance Evaluation, Prompt Engineering, Automated Vulnerability Remediation

RTLMarker: Protecting LLM-Generated RTL Copyright via a Hardware Watermarking Framework

Authors: Kun Wang, Kaiyan Chang, Mengdi Wang, Xinqi Zou, Haobo Xu, Yinhe Han, Ying Wang | Published: 2025-01-05
Topics: Prompt Injection, Watermark Robustness, Watermark Evaluation

A Statistical Hypothesis Testing Framework for Data Misappropriation Detection in Large Language Models

Authors: Yinpeng Cai, Lexin Li, Linjun Zhang | Published: 2025-01-05
Topics: Framework, Hypothesis Testing, Watermark Evaluation

BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors

Authors: Chia-Yi Hsu, Yu-Lin Tsai, Yu Zhe, Yan-Lun Chen, Chih-Hsun Lin, Chia-Mu Yu, Yang Zhang, Chun-Ying Huang, Jun Sakuma | Published: 2025-01-04
Topics: Backdoor Attack, Defense Method

GNSS/GPS Spoofing and Jamming Identification Using Machine Learning and Deep Learning

Authors: Ali Ghanbarzade, Hossein Soleimani | Published: 2025-01-04
Topics: GNSS Security, Prompt Injection, Label

Leveraging Large Language Models and Machine Learning for Smart Contract Vulnerability Detection

Authors: S M Mostaq Hossain, Amani Altarawneh, Jesse Roberts | Published: 2025-01-04
Topics: LLM Performance Evaluation, Smart Contract

Towards Robust and Accurate Stability Estimation of Local Surrogate Models in Text-based Explainable AI

Authors: Christopher Burger, Charles Walter, Thai Le, Lingwei Chen | Published: 2025-01-03
Topics: Experimental Validation

Mingling with the Good to Backdoor Federated Learning

Authors: Nuno Neves | Published: 2025-01-03
Topics: Backdoor Attack, Poisoning

Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models

Authors: Yanjiang Liu, Shuhen Zhou, Yaojie Lu, Huijia Zhu, Weiqiang Wang, Hongyu Lin, Ben He, Xianpei Han, Le Sun | Published: 2025-01-03
Topics: Framework, Prompt Injection, Attack Method