Abstract
Recent studies reveal that Large Language Models (LLMs) struggle to balance
safety with utility, particularly when processing long texts for NLP tasks
such as summarization and translation. Although LLMs can defend against
malicious short questions, their ability to safely handle dangerous long
content, such as manuals teaching illicit activities, remains unclear. Our
work aims to develop robust defenses for LLMs that process malicious
documents alongside benign NLP task queries. We introduce a defense dataset
comprising safety-related examples and propose single-task and mixed-task
losses for instruction tuning. Our empirical results demonstrate that, with
appropriate instruction tuning, LLMs can significantly improve their capacity
to safely manage dangerous content. Additionally, strengthening the defenses
of the tasks most susceptible to misuse effectively protects LLMs against
processing harmful information. We also observe trade-offs between utility
and safety in defense strategies, where Llama2, tuned with our proposed
approach, strikes a significantly better balance than Llama1.
External Datasets
- defense dataset (introduced in this work)
- Diverse-Topic subset of Fu et al. (2023)
- 30k validation dataset of BeaverTails (Ji et al., 2023)