Content Moderation

PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for Text-to-Image Models

Authors: Lingzhi Yuan, Xinfeng Li, Chejian Xu, Guanhong Tao, Xiaojun Jia, Yihao Huang, Wei Dong, Yang Liu, XiaoFeng Wang, Bo Li | Published: 2025-01-07
Tags: Content Moderation | Soft Prompt Optimization | Prompt Injection

Toxicity Detection towards Adaptability to Changing Perturbations

Authors: Hankun Kang, Jianhao Chen, Yongqi Li, Xin Miao, Mayi Xu, Ming Zhong, Yuanyuan Zhu, Tieyun Qian | Published: 2024-12-17 | Updated: 2025-01-08
Tags: Content Moderation | Label

Heuristic-Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models

Authors: Ma Teng, Jia Xiaojun, Duan Ranjie, Li Xinfeng, Huang Yihao, Chu Zhixuan, Liu Yang, Ren Wenqi | Published: 2024-12-08 | Updated: 2025-01-03
Tags: Content Moderation | Prompt Injection | Attack Method

On Calibration of LLM-based Guard Models for Reliable Content Moderation

Authors: Hongfu Liu, Hengguan Huang, Hao Wang, Xiangming Gu, Ye Wang | Published: 2024-10-14
Tags: LLM Performance Evaluation | Content Moderation | Prompt Injection

MoJE: Mixture of Jailbreak Experts, Naive Tabular Classifiers as Guard for Prompt Attacks

Authors: Giandomenico Cornacchia, Giulio Zizzo, Kieran Fraser, Muhammad Zaid Hameed, Ambrish Rawat, Mark Purcell | Published: 2024-09-26 | Updated: 2024-10-04
Tags: Guardrail Method | Content Moderation | Prompt Injection

Well, that escalated quickly: The Single-Turn Crescendo Attack (STCA)

Authors: Alan Aqrawi, Arian Abbasi | Published: 2024-09-04 | Updated: 2024-09-10
Tags: LLM Security | Content Moderation | Attack Method

Safeguarding AI Agents: Developing and Analyzing Safety Architectures

Authors: Ishaan Domkundwar, Mukunda N S, Ishaan Bhola | Published: 2024-09-03 | Updated: 2024-09-13
Tags: Content Moderation | Internal Review System | Safety Alignment

Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models

Authors: Bang An, Sicheng Zhu, Ruiyi Zhang, Michael-Andrei Panaitescu-Liess, Yuancheng Xu, Furong Huang | Published: 2024-09-01
Tags: LLM Performance Evaluation | Content Moderation | Prompt Injection

Efficient Detection of Toxic Prompts in Large Language Models

Authors: Yi Liu, Junzhe Yu, Huijia Sun, Ling Shi, Gelei Deng, Yuqi Chen, Yang Liu | Published: 2024-08-21 | Updated: 2024-09-14
Tags: Content Moderation | Prompt Injection | Model Performance Evaluation

BaThe: Defense against the Jailbreak Attack in Multimodal Large Language Models by Treating Harmful Instruction as Backdoor Trigger

Authors: Yulin Chen, Haoran Li, Yirui Zhang, Zihao Zheng, Yangqiu Song, Bryan Hooi | Published: 2024-08-17 | Updated: 2025-04-22
Tags: AI Compliance | LLM Security | Content Moderation