Deepfakes are synthetically generated media, often devised with malicious
intent. They have become increasingly convincing thanks to large training
datasets and advanced neural networks. These fakes are readily misused for
slander, misinformation, and fraud. For this reason, intensive research into
countermeasures is also expanding. However, recent work is almost exclusively
limited to deepfake detection: predicting whether a given audio sample is real
or fake. This is despite the fact that attribution (who created which fake?)
is an essential building block of a larger defense strategy, as has long been
practiced in the field of cybersecurity. This paper considers the problem of
deepfake attacker attribution in the domain of audio. We present several
methods for creating attacker signatures using low-level acoustic descriptors
and machine learning embeddings. We show that speech signal features are
inadequate for characterizing attacker signatures. However, we also demonstrate
that embeddings from a recurrent neural network can successfully characterize
attacks from both known and unknown attackers. Our attack signature embeddings
form distinct clusters for both seen and unseen audio deepfakes. We show that
these embeddings can be used in downstream tasks to high effect, achieving
97.10% accuracy in attacker-ID classification.