Abstract
Recent advancements in machine learning and computer vision have led to the
proliferation of Deepfakes. As technology democratizes over time, there is an
increasing fear that novice users can create Deepfakes to discredit others and
undermine public discourse. In this paper, we conduct user studies to
understand whether participants with advanced computer skills and varying
levels of computer science expertise can create Deepfakes of a person saying a
target statement using limited media files. We conduct two studies; in the
first study (n = 39), participants attempt to create a target Deepfake within a
constrained time frame using any tool they desire. In the second study (n = 29),
participants use pre-specified deep learning-based tools to create the same
Deepfake. We find that in the first study, 23.1% of the participants
successfully created complete Deepfakes with audio and video, whereas in the
second study, 58.6% of the participants successfully stitched the target speech
onto the target video. We further use Deepfake detection software tools as well
as human examiner-based analysis to classify the successfully generated
Deepfake outputs as fake, suspicious, or real. The software detector
classified 80% of the Deepfakes as fake, whereas the human examiners classified
100% of the videos as fake. We conclude that creating Deepfakes is a simple
enough task for a novice user given adequate tools and time; however, the
resulting Deepfakes are not sufficiently real-looking and are unable to
completely fool detection software or human examiners.