Abstract
While diffusion models demonstrate a remarkable capability for generating
high-quality images, their tendency to "replicate" training data raises privacy
concerns. Although recent research suggests that this replication may stem from
the insufficient generalization of training data captions and duplication of
training images, effective mitigation strategies remain elusive. To address
this gap, our paper first introduces a generality score that measures caption
generality and employs a large language model (LLM) to generalize training
captions. Subsequently, we leverage the generalized captions and propose a novel
dual fusion enhancement approach to mitigate replication in diffusion
models. Our empirical results demonstrate that our proposed methods can
significantly reduce replication by 43.5% compared to the original diffusion
model while maintaining the diversity and quality of generations. Code is
available at https://github.com/HowardLi0816/dual-fusion-diffusion.