Recent advancements in radio frequency machine learning (RFML) have
demonstrated the use of raw in-phase and quadrature (IQ) samples for multiple
spectrum sensing tasks. Yet, deep learning techniques have been shown, in other
applications, to be vulnerable to adversarial machine learning (ML) techniques,
which seek to craft small perturbations that are added to the input to cause a
misclassification. The current work differentiates the threats that adversarial
ML poses to RFML systems based on where the attack is executed: with direct
access to the classifier input, synchronously transmitted over the air (OTA), or
asynchronously transmitted from a separate device. Additionally, the current
work develops a methodology for evaluating adversarial success in the context
of wireless communications, where the primary metric of interest is bit error
rate and not human perception, as is the case in image recognition. The
methodology is demonstrated using the well-known Fast Gradient Sign Method
(FGSM) to evaluate the vulnerabilities of raw IQ-based Automatic Modulation
Classification, and concludes that RFML is vulnerable to adversarial examples, even
in OTA attacks. However, receiver effects specific to the RFML domain, which
would be encountered in an OTA attack, can present significant impairments to
adversarial evasion.
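
To illustrate the attack family evaluated here, the following is a minimal sketch of FGSM applied to an input vector standing in for raw IQ samples. It uses a toy linear classifier with an analytically computed gradient, purely for illustration; the model, dimensions, and perturbation budget `eps` are assumptions, not the paper's actual network or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy stand-in model (assumption): logits = W @ x, where x is a flattened
# window of I/Q samples and classes play the role of modulation types.
n_features, n_classes = 16, 4
W = rng.standard_normal((n_classes, n_features))
x = rng.standard_normal(n_features)   # stand-in for raw IQ samples
y_true = 2                            # true "modulation" label

def grad_loss_wrt_x(W, x, y):
    # Gradient of cross-entropy loss w.r.t. the input for this linear
    # model: dL/dx = W^T (softmax(Wx) - one_hot(y)).
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

# FGSM: perturb the input by eps in the sign direction of the gradient.
eps = 0.1
x_adv = x + eps * np.sign(grad_loss_wrt_x(W, x, y_true))

print("clean prediction:", np.argmax(W @ x))
print("adv   prediction:", np.argmax(W @ x_adv))
```

In an OTA setting, `x_adv` would additionally pass through transmit hardware, the channel, and receiver synchronization before reaching the classifier, which is precisely where the receiver effects discussed above degrade the attack.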