Abstract
Large language models pretrained on massive corpora capture rich
knowledge from their training data. However, as previous studies have
revealed, pretrained language models can memorize and regurgitate their
training data, which brings the risk of privacy leakage. To effectively
reduce this risk, we propose DEPN, a framework to Detect and Edit Privacy
Neurons in pretrained language models, partially inspired by knowledge
neurons and model editing. In DEPN, we introduce a novel method, termed the
privacy neuron detector, to locate neurons associated with private
information, and then edit the detected privacy neurons by setting their
activations to zero.
Furthermore, we propose a privacy neuron aggregator to dememorize private
information in a batch-processing manner. Experimental results show that our
method can significantly and efficiently reduce the exposure of private data
without deteriorating model performance. Additionally, we empirically analyze
the relationship between model memorization and privacy neurons from multiple
perspectives, including model size, training time, prompts, and privacy neuron
distribution, illustrating the robustness of our approach.
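The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the editing step it describes: zeroing the activations of detected privacy neurons. The toy FFN module, the `privacy_neurons` set, and all identifiers are illustrative assumptions, not the authors' code; in DEPN the (layer, neuron) pairs would come from the privacy neuron detector.

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer FFN block; DEPN operates on the FFN
# layers of a real pretrained language model.
class ToyFFN(nn.Module):
    def __init__(self, d_model=16, d_ff=64):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Hypothetical detector output: (layer_index, neuron_index) pairs
# flagged as privacy neurons. Placeholder values for illustration.
privacy_neurons = {(0, 3), (0, 17), (1, 5)}

def make_suppression_hook(layer_idx):
    """Forward hook that zeroes the detected privacy neurons in one layer."""
    indices = [n for (l, n) in privacy_neurons if l == layer_idx]

    def hook(module, inputs, output):
        if indices:
            # Editing step: set the privacy neuron activations to zero.
            # Zeroing the pre-ReLU output also zeroes the post-ReLU activation.
            output[..., indices] = 0.0
        return output

    return hook

layers = nn.ModuleList([ToyFFN(), ToyFFN()])
for i, layer in enumerate(layers):
    # Hook fc1 so the intermediate (hidden) neuron activations are suppressed.
    layer.fc1.register_forward_hook(make_suppression_hook(i))

x = torch.randn(2, 16)
for layer in layers:
    x = layer(x)  # privacy neurons are zeroed on every forward pass
```

Note that hooks of this kind leave the model weights untouched; the actual suppression strategy in DEPN, and how its privacy neuron aggregator batches neurons across multiple pieces of private information, may differ from this sketch.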