Literature Database

MLLMGuard: A Multi-dimensional Safety Evaluation Suite for Multimodal Large Language Models

Authors: Tianle Gu, Zeyang Zhou, Kexin Huang, Dandan Liang, Yixu Wang, Haiquan Zhao, Yuanqi Yao, Xingge Qiao, Keqing Wang, Yujiu Yang, Yan Teng, Yu Qiao, Yingchun Wang | Published: 2024-06-11 | Updated: 2024-06-13
LLM Performance Evaluation
Dataset Generation
Evaluation Method

OllaBench: Evaluating LLMs’ Reasoning for Human-centric Interdependent Cybersecurity

Authors: Tam N. Nguyen | Published: 2024-06-11
LLM Performance Evaluation
Cybersecurity
Evaluation Method

A Survey of Recent Backdoor Attacks and Defenses in Large Language Models

Authors: Shuai Zhao, Meihuizi Jia, Zhongliang Guo, Leilei Gan, Xiaoyu Xu, Xiaobao Wu, Jie Fu, Yichao Feng, Fengjun Pan, Luu Anh Tuan | Published: 2024-06-10 | Updated: 2025-01-04
LLM Security
Backdoor Attack

An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection

Authors: Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, Yuan Hong | Published: 2024-06-10
LLM Security
Backdoor Attack
Prompt Injection

Robust Distribution Learning with Local and Global Adversarial Corruptions

Authors: Sloan Nietert, Ziv Goldfeld, Soroosh Shafiee | Published: 2024-06-10 | Updated: 2024-06-24
Watermarking
Loss Function
Evaluation Method

LLM Dataset Inference: Did you train on my dataset?

Authors: Pratyush Maini, Hengrui Jia, Nicolas Papernot, Adam Dziedzic | Published: 2024-06-10
LLM Security
Data Privacy Assessment
Membership Inference

SecureNet: A Comparative Study of DeBERTa and Large Language Models for Phishing Detection

Authors: Sakshi Mahendru, Tejul Pandit | Published: 2024-06-10
LLM Performance Evaluation
Phishing Detection
Prompt Injection

Siren — Advancing Cybersecurity through Deception and Adaptive Analysis

Authors: Samhruth Ananthanarayanan, Girish Kulathumani, Ganesh Narayanan | Published: 2024-06-10 | Updated: 2025-04-24
Cybersecurity
Proactive Defense
Cryptography

Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning

Authors: Xiaoting Lyu, Yufei Han, Wei Wang, Jingkai Liu, Yongsheng Zhu, Guangquan Xu, Jiqiang Liu, Xiangliang Zhang | Published: 2024-06-10
Backdoor Attack
Poisoning

A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks

Authors: Hengzhu Liu, Ping Xiong, Tianqing Zhu, Philip S. Yu | Published: 2024-06-10
Backdoor Attack
Poisoning
Membership Inference