You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content
Authors: Xinlei He, Savvas Zannettou, Yun Shen, Yang Zhang | Published: 2023-08-10
Tags: Text Detoxification, Prompt Leaking, Calculation of Output Harmfulness
Toxicity Detection with Generative Prompt-based Inference
Authors: Yau-Shian Wang, Yingshan Chang | Published: 2022-05-24
Tags: Prompting Strategy, Calculation of Output Harmfulness, Large Language Model