Abstract
In this work we ask whether it is possible to create a "universal" detector
for telling apart real images from those generated by a CNN, regardless of
architecture or dataset used. To test this, we collect a dataset consisting of
fake images generated by 11 different CNN-based image generator models, chosen
to span the space of commonly used architectures today (ProGAN, StyleGAN,
BigGAN, CycleGAN, StarGAN, GauGAN, DeepFakes, cascaded refinement networks,
implicit maximum likelihood estimation, second-order attention
super-resolution, seeing-in-the-dark). We demonstrate that, with careful pre-
and post-processing and data augmentation, a standard image classifier trained
on only one specific CNN generator (ProGAN) is able to generalize surprisingly
well to unseen architectures, datasets, and training methods (including the
just-released StyleGAN2). Our findings suggest the intriguing possibility that
today's CNN-generated images share some common systematic flaws, preventing
them from achieving realistic image synthesis. Code and pre-trained networks
are available at https://peterwang512.github.io/CNNDetection/ .
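The core recipe the abstract describes is a standard binary real-vs-fake classifier trained with randomized augmentation. The sketch below illustrates that setup in miniature, with several loud assumptions: synthetic 8-dimensional feature vectors stand in for image crops, a fixed mean offset stands in for the hypothesized "common systematic flaws" of CNN generators, and additive Gaussian noise stands in for the paper's blur/JPEG augmentation. It is not the authors' pipeline, only a toy logistic-regression analogue of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: in the paper these are image crops; here we
# use synthetic feature vectors. The mean shift on "fake" samples plays the
# role of a shared generator artifact (an assumption for illustration).
n, d = 400, 8
real = rng.normal(0.0, 1.0, (n, d))
fake = rng.normal(0.6, 1.0, (n, d))
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(n), np.ones(n)])

def augment(X, rng, p=0.5):
    """Randomly perturb a fraction of samples each step, so the classifier
    cannot rely on fragile low-level cues. Gaussian noise is a stand-in for
    the paper's blur + JPEG augmentation."""
    mask = rng.random(len(X)) < p
    Xa = X.copy()
    Xa[mask] += rng.normal(0.0, 0.5, (int(mask.sum()), X.shape[1]))
    return Xa

# Logistic-regression "detector" trained by gradient descent.
w, b, lr = np.zeros(d), 0.0, 0.1
for _ in range(300):
    Xa = augment(X, rng)
    p = 1.0 / (1.0 + np.exp(-(Xa @ w + b)))   # sigmoid probabilities
    g = p - y                                  # gradient of cross-entropy
    w -= lr * (Xa.T @ g) / len(y)
    b -= lr * g.mean()

acc = float((((X @ w + b) > 0) == y).mean())
print(f"train accuracy: {acc:.2f}")
```

The key design point mirrored here is that augmentation is applied on the fly during training: each step sees a differently perturbed batch, which in the paper is what makes a detector trained on a single generator (ProGAN) transfer to unseen architectures.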