Abstract
Toxic content detection is crucial for online services to remove
inappropriate content that violates community standards. To automate the
detection process, prior works have proposed a variety of machine learning (ML)
approaches to train Language Models (LMs) for toxic content detection. However,
both their accuracy and transferability across datasets are limited. Recently,
Large Language Models (LLMs) have shown promise in toxic content detection due
to their superior zero-shot and few-shot in-context learning ability as well as
broad transferability across ML tasks. However, efficiently designing prompts for
LLMs remains challenging. Moreover, the high run-time cost of LLMs may hinder
their deployment in production. To address these challenges, in this work, we
propose BD-LLM, a novel and efficient approach to Bootstrapping and Distilling
LLMs for toxic content detection. Specifically, we design a novel prompting
method named Decision-Tree-of-Thought (DToT) to bootstrap LLMs' detection
performance and extract high-quality rationales. DToT can automatically select
more fine-grained context to re-prompt LLMs when their responses lack
confidence. Additionally, we use the rationales extracted via DToT to fine-tune
student LMs. Our experimental results on various datasets demonstrate that DToT
can improve the accuracy of LLMs by up to 4.6%. Furthermore, student LMs
fine-tuned with rationales extracted via DToT outperform baselines on all
datasets with up to 16.9% accuracy improvement, while being more than 60x
smaller than conventional LLMs. Finally, we observe that student LMs fine-tuned
with rationales exhibit better cross-dataset transferability.
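To make the core DToT idea concrete, below is a minimal Python sketch of confidence-gated re-prompting: if the LLM's answer lacks confidence, the query is retried with a more fine-grained context. All names here (query_llm, CONTEXTS, CONF_THRESHOLD) and the flat coarse-to-fine list standing in for the decision tree are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of confidence-gated re-prompting in the spirit of DToT.
# All names (query_llm, CONTEXTS, CONF_THRESHOLD) are illustrative
# assumptions; the paper's actual tree construction may differ.

from dataclasses import dataclass

@dataclass
class LLMResponse:
    label: str         # e.g. "toxic" / "non-toxic"
    rationale: str     # free-text explanation returned by the LLM
    confidence: float  # self-reported or logit-derived confidence in [0, 1]

def query_llm(text: str, context: str) -> LLMResponse:
    """Placeholder for an actual LLM call (e.g. via an API client)."""
    raise NotImplementedError

# Progressively more fine-grained prompting contexts, coarse to specific,
# standing in for one root-to-leaf path of the decision tree.
CONTEXTS = [
    "Decide whether the text violates community standards.",
    "Decide whether the text contains hate speech, harassment, or threats.",
    "Decide whether the text targets a person or group with insults or slurs.",
]

CONF_THRESHOLD = 0.8  # assumed cutoff below which we re-prompt

def dtot_classify(text: str) -> LLMResponse:
    """Re-prompt with finer context until the LLM answers confidently."""
    response = None
    for context in CONTEXTS:
        response = query_llm(text, context)
        if response.confidence >= CONF_THRESHOLD:
            break  # confident enough; stop descending
    return response  # final label plus a rationale usable for distillation
```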
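The distillation step can likewise be sketched as pairing each input with the teacher's label and rationale, so the student LM is fine-tuned to reproduce both rather than the label alone. The target format below ("Label: ... Rationale: ...") is an assumption for illustration; the paper may structure its distillation targets differently.

```python
# Sketch of rationale-augmented fine-tuning data for the student LM.
# The target format is an assumption, not the paper's exact scheme.

def build_distillation_example(text: str, label: str, rationale: str) -> dict:
    """Pair an input with the teacher's label AND rationale so the
    student is trained to reproduce both, not just the label."""
    return {
        "input": f"Classify the following text as toxic or non-toxic:\n{text}",
        "target": f"Label: {label}\nRationale: {rationale}",
    }

# Usage: feed DToT outputs (label + rationale per example) through this
# builder, then fine-tune a small seq2seq or causal LM on the resulting
# pairs with a standard supervised fine-tuning loop.
example = build_distillation_example(
    "you people are the worst",                              # input text
    "toxic",                                                 # teacher label
    "The message insults a group of people collectively.",   # teacher rationale
)
```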