Calculation of Output Harmfulness

CTRAP: Embedding Collapse Trap to Safeguard Large Language Models from Harmful Fine-Tuning

Authors: Biao Yi, Tiansheng Huang, Baolei Zhang, Tong Li, Lihai Nie, Zheli Liu, Li Shen | Published: 2025-05-22
Tags: Alignment, Indirect Prompt Injection, Calculation of Output Harmfulness

SoK: Knowledge is All You Need: Accelerating Last Mile Delivery for Automated Provenance-based Intrusion Detection with LLMs

Authors: Wenrui Cheng, Tiantian Zhu, Chunlin Xiong, Haofei Sun, Zijun Wang, Shunan Jing, Mingqi Lv, Yan Chen | Published: 2025-03-05 | Updated: 2025-04-28
Tags: RAG, Calculation of Output Harmfulness, Attack Detection

Efficient Toxic Content Detection by Bootstrapping and Distilling Large Language Models

Authors: Jiang Zhang, Qiong Wu, Yiming Xu, Cheng Cao, Zheng Du, Konstantinos Psounis | Published: 2023-12-13
Tags: Prompting Strategy, Calculation of Output Harmfulness, Large Language Model

You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content

Authors: Xinlei He, Savvas Zannettou, Yun Shen, Yang Zhang | Published: 2023-08-10
Tags: Text Detoxification, Prompt Leaking, Calculation of Output Harmfulness

Toxicity Detection with Generative Prompt-based Inference

Authors: Yau-Shian Wang, Yingshan Chang | Published: 2022-05-24
Tags: Prompting Strategy, Calculation of Output Harmfulness, Large Language Model
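
Every entry above shares the "Calculation of Output Harmfulness" tag, i.e., each paper involves assigning a numeric harmfulness or toxicity score to model output. None of the listed papers' specific methods are reproduced here; as a rough, generic sketch only, the snippet below scores a piece of text with an off-the-shelf toxicity classifier via the Hugging Face transformers pipeline. The model choice (unitary/toxic-bert) and the 0.5 decision threshold are illustrative assumptions, not drawn from any of the papers above.

```python
# Generic illustration only: score a model output's harmfulness with an
# off-the-shelf toxicity classifier. The model and threshold below are
# assumptions for this example, not any listed paper's method.
from transformers import pipeline

scorer = pipeline("text-classification", model="unitary/toxic-bert")

def harmfulness_score(output_text: str) -> float:
    """Return the classifier's probability that the text is toxic (0..1)."""
    # top_k=None returns a score for every label of this multi-label model;
    # we keep only the generic "toxic" label as the harmfulness score.
    scores = scorer(output_text, truncation=True, top_k=None)
    return next(s["score"] for s in scores if s["label"] == "toxic")

def is_harmful(output_text: str, threshold: float = 0.5) -> bool:
    """Binary harmfulness decision using an assumed threshold."""
    return harmfulness_score(output_text) >= threshold

if __name__ == "__main__":
    for text in ["Have a great day!", "You are worthless and everyone hates you."]:
        print(f"{harmfulness_score(text):.3f}  {text}")
```

Several of the papers above (e.g., the prompt-based toxicity detection work) instead elicit such scores directly from a large language model rather than a dedicated classifier; the pipeline-based scorer here is just the simplest self-contained stand-in for that scoring step.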