Verifying the authenticity of images has become increasingly important as manipulation tools grow more accessible and sophisticated. Recent work has shown
that while CNN-based image manipulation detectors can successfully identify
manipulations, they are also vulnerable to adversarial attacks, ranging from simple double JPEG compression to advanced pixel-level perturbations. In this paper we explore another highly plausible attack: printing and
scanning. We demonstrate the vulnerability of two state-of-the-art models to
this type of attack. We also propose a new machine learning model that performs
comparably to these state-of-the-art models when trained and validated on
printed and scanned images. Of the three models, ours performs best when trained and validated on images from a single printer. To
facilitate this exploration, we create a dataset of over 6,000 printed and
scanned image blocks. Further analysis suggests that the variation among images produced by different printers is substantial: high validation accuracy on images from one printer does not imply similar accuracy on identical images from a different printer.