Adversarial examples in machine learning for images are widely publicized and
explored. Illustrations of misclassifications caused by slightly perturbed
inputs are abundant and commonly known (e.g., a picture of a panda
imperceptibly perturbed to fool the classifier into labeling it as a gibbon).
Similar attacks on deep learning (DL) for radio frequency (RF) signals and
their mitigation strategies are scarcely addressed in the published work. Yet,
RF adversarial examples (AdExs) with minimal waveform perturbations can cause
drastic, targeted misclassification results, particularly against spectrum
sensing/survey applications (e.g., BPSK mistaken for 8-PSK). Our research on
deep learning AdExs and the defense mechanisms we propose is RF-centric and
incorporates physical-world, over-the-air (OTA) effects. We herein present
defense mechanisms based on pre-training the target classifier using an
autoencoder. Our results validate this approach as a viable method for
mitigating adversarial attacks against deep learning-based communications and
radar sensing systems.
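To make the defense idea concrete, the sketch below pre-trains a tiny autoencoder on synthetic noisy BPSK I/Q samples; the learned encoder weights could then initialize the first layer of a modulation classifier, biasing it toward the clean-signal manifold. Everything here is an illustrative assumption (the NumPy-only linear architecture, the synthetic data, and all names), not the paper's actual models or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpsk_batch(n, snr_db=10.0):
    """Synthetic noisy BPSK I/Q samples (hypothetical data, shape (n, 2))."""
    bits = rng.integers(0, 2, size=n)
    symbols = 2.0 * bits - 1.0              # map {0,1} -> {-1,+1} on I channel
    noise_std = 10 ** (-snr_db / 20.0)
    i = symbols + noise_std * rng.standard_normal(n)
    q = noise_std * rng.standard_normal(n)  # BPSK carries no Q-channel info
    return np.stack([i, q], axis=1)

# Tiny linear autoencoder: 2-D I/Q -> 1-D latent -> 2-D reconstruction
W_enc = 0.1 * rng.standard_normal((2, 1))
W_dec = 0.1 * rng.standard_normal((1, 2))

lr = 0.05
for step in range(500):
    x = bpsk_batch(256)
    z = x @ W_enc                           # encode
    x_hat = z @ W_dec                       # decode
    err = x_hat - x
    # gradient descent on mean squared reconstruction error
    g_dec = z.T @ err / len(x)
    g_enc = x.T @ (err @ W_dec.T) / len(x)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# Reconstruction error on held-out samples; a well-trained encoder keeps it
# near the off-axis noise floor, far below the trivial all-zeros baseline (~1.0).
x = bpsk_batch(2048)
mse = float(np.mean((x @ W_enc @ W_dec - x) ** 2))
print(mse)

# In the pre-training defense, W_enc would now initialize the classifier's
# input layer before supervised training on modulation labels.
```

The intuition is that an encoder fit to clean waveforms emphasizes directions of legitimate signal variation, so small off-manifold adversarial perturbations are partially suppressed before they reach the classifier.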