Abstract
Federated learning (FL) enables multiple parties to collaboratively fine-tune
a large language model (LLM) without directly sharing their data. Ideally,
by training on decentralized data that is aligned with human preferences and
safety principles, federated instruction tuning (FedIT) can yield an LLM that
behaves in a helpful and safe manner. In this paper, we reveal, for the first
time, the vulnerability of safety alignment in FedIT by proposing a simple,
stealthy, yet effective safety attack method.
stealthy, yet effective safety attack method. Specifically, the malicious
clients could automatically generate attack data without involving manual
efforts and attack the FedIT system by training their local LLMs on such attack
data. Unfortunately, this proposed safety attack not only can compromise the
safety alignment of LLM trained via FedIT, but also can not be effectively
defended against by many existing FL defense methods. Targeting this, we
further propose a post-hoc defense method, which could rely on a fully
automated pipeline: generation of defense data and further fine-tuning of the
Extensive experiments show that our safety attack method can significantly
compromise the LLM's safety alignment (e.g., reducing the safety rate by 70\%),
which cannot be effectively defended against by existing defense methods (at
most a 4\% absolute improvement), while our safety defense method can
significantly enhance the attacked LLM's safety alignment (at most a 69\%
absolute improvement).
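
Since the abstract gives no implementation details, the following is a minimal,
hypothetical sketch (not the paper's method) of why such an attack can slip
through: under standard FedAvg aggregation, a single client that fine-tunes on
auto-generated attack data contributes an update that is averaged into the
global model every round. All names here (local_update, fedavg, NUM_CLIENTS)
are illustrative stand-ins, and the "model" is a toy weight vector rather than
an LLM.

# Toy FedAvg round with one malicious client; assumptions as stated above.
import numpy as np

NUM_CLIENTS = 5
DIM = 4  # toy stand-in for LLM (or adapter) parameters

rng = np.random.default_rng(0)
global_weights = np.zeros(DIM)

def local_update(weights, malicious):
    """Simulate one round of local fine-tuning.

    A benign client nudges the weights toward an 'aligned' direction; a
    malicious client, fine-tuning on auto-generated attack data, is modeled
    as a push in the opposite (safety-degrading) direction.
    """
    direction = -1.0 if malicious else 1.0
    return weights + direction * rng.uniform(0.5, 1.0, DIM)

def fedavg(updates):
    """Standard FedAvg: uniform average of the client updates."""
    return np.mean(updates, axis=0)

for rnd in range(3):
    updates = [
        local_update(global_weights, malicious=(cid == 0))  # client 0 is malicious
        for cid in range(NUM_CLIENTS)
    ]
    global_weights = fedavg(updates)
    print(f"round {rnd}: global weights = {np.round(global_weights, 2)}")

# Because FedAvg weights every client equally, the poisoned update is folded
# into the global model each round, which is consistent with the abstract's
# claim that stealthy attack data can degrade safety alignment without being
# filtered out by aggregation-based defenses.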