Abstract
Large Language Models (LLMs) are vulnerable to jailbreak prompts. Existing
defenses against jailbreak attacks rely primarily on auxiliary models, and
these strategies often require extensive data collection or training. We
propose LightDefense, a lightweight defense mechanism for white-box models
that uses a safety-oriented direction to adjust the probabilities of tokens
in the vocabulary, so that safety disclaimers rank among the most probable
next tokens. We further leverage the LLM's uncertainty about a prompt as a
measure of its harmfulness and adaptively adjust the defense strength
accordingly, effectively balancing safety and helpfulness. LightDefense
defends against 5 attack methods across 2 target LLMs without compromising
helpfulness on benign user queries, highlighting its potential as a novel
and lightweight mechanism for enhancing the security of LLMs.
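
To make the described mechanism concrete, the following is a minimal sketch (not the authors' code) of the idea in the abstract: next-token logits are shifted along a hypothetical safety-oriented direction so that safety-disclaimer tokens rank higher, with the shift scaled by the model's uncertainty about the prompt. All names here (safety_direction, the entropy-based uncertainty proxy, the scaling factor alpha) are illustrative assumptions rather than the paper's actual formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a logit vector.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def adjusted_next_token_probs(logits, safety_direction, uncertainty, alpha=5.0):
    """Bias logits toward safety tokens, scaled by prompt uncertainty in [0, 1]."""
    biased = logits + alpha * uncertainty * safety_direction
    return softmax(biased)

# Toy example: a 6-token vocabulary where tokens 4 and 5 stand in for
# "safety disclaimer" tokens (an assumption for illustration only).
vocab_size = 6
logits = np.array([2.0, 1.5, 1.2, 0.8, -1.0, -1.5])
safety_direction = np.zeros(vocab_size)
safety_direction[[4, 5]] = 1.0

# Proxy for the model's uncertainty about the prompt: normalized entropy of
# the unmodified next-token distribution (higher entropy -> stronger defense).
p = softmax(logits)
uncertainty = -(p * np.log(p)).sum() / np.log(vocab_size)

p_adj = adjusted_next_token_probs(logits, safety_direction, uncertainty)
# Disclaimer tokens move toward the top of the ranking when uncertainty is high.
print(np.argsort(-p_adj)[:3])
```

Under this sketch, benign prompts with a confident (low-entropy) next-token distribution receive almost no adjustment, while uncertain prompts receive a larger shift toward disclaimer tokens, which is one plausible reading of how the adaptive defense strength balances safety and helpfulness.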