Abstract
Many deep neural network (DNN) based modulation classification schemes have
recently been proposed in the literature. We have evaluated the robustness of
two well-known such modulation classifiers (based on convolutional neural
networks and long short-term memory networks) against adversarial machine
learning attacks in black-box settings, using the Carlini \& Wagner (C-W)
attack to craft the adversarial examples. To the best of our knowledge, the
robustness of these modulation classifiers has not previously been evaluated
against the C-W attack. Our results clearly indicate that state-of-the-art
deep machine learning based modulation classifiers are not robust against
adversarial attacks.
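To make the attack concrete, the following is a minimal sketch of the core C-W L2 optimization: minimize ||delta||^2 + c * max(z_true - max_{i != true} z_i + kappa, 0) by gradient descent, keeping the smallest perturbation found that flips the prediction. The toy linear classifier, the constants `c`, `kappa`, `lr`, and `steps`, and the function name `cw_l2_attack` are all illustrative assumptions, not the paper's implementation; the paper applies the attack to CNN/LSTM classifiers in black-box settings, whereas this sketch uses white-box gradients on a toy model purely to show the objective.

```python
import numpy as np

def cw_l2_attack(W, x, true_label, c=5.0, kappa=0.1, lr=0.01, steps=500):
    """C-W style L2 attack sketch on a toy linear classifier (logits = W @ x).

    Minimizes ||delta||^2 + c * max(z_true - max_other + kappa, 0) by
    gradient descent; returns the smallest misclassifying delta found
    (None if the attack never succeeds). All hyperparameters are
    illustrative assumptions, not values from the paper.
    """
    delta = np.zeros_like(x, dtype=float)
    best_delta, best_norm = None, np.inf
    for _ in range(steps):
        z = W @ (x + delta)
        if np.argmax(z) != true_label and np.linalg.norm(delta) < best_norm:
            best_delta, best_norm = delta.copy(), np.linalg.norm(delta)
        # Runner-up class: highest logit among the non-true classes.
        others = np.delete(z, true_label)
        j = int(np.argmax(others))
        idx = j if j < true_label else j + 1
        hinge = z[true_label] - z[idx] + kappa
        # Gradient of the regularizer plus (when active) the hinge term.
        grad = 2.0 * delta
        if hinge > 0:
            grad = grad + c * (W[true_label] - W[idx])
        delta = delta - lr * grad
    return best_delta

# Toy 3-class classifier; x is initially classified as class 0.
W = np.array([[ 2.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])
x = np.array([1.0, 0.0])
delta = cw_l2_attack(W, x, true_label=0)
```

In the full C-W attack the gradient steps come from backpropagation through the target (or a surrogate) network, and the trade-off constant `c` is tuned by binary search; the sketch above keeps only the objective and the "track the best small perturbation" bookkeeping.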