Intelligent Jamming of Deep Neural Network Based Signal Classification for Shared Spectrum

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM), Nov. 2021, pp. 987-992
Issue Date:
2021-12-30
Abstract:
Deep neural networks (DNNs) have recently been applied to the classification of radio frequency (RF) signals. One use case of interest is discriminating between different wireless technologies that share the spectrum. Although highly accurate DNN classifiers have been proposed, preliminary research points to the vulnerability of these classifiers to adversarial machine learning (AML) attacks. In one such attack, the attacker trains a surrogate DNN model to produce intelligently crafted low-power “perturbations” that degrade the classification accuracy of the legitimate classifier. In this paper, we design four DNN-based classifiers for identifying Wi-Fi, 5G NR-Unlicensed (NR-U), and LTE License-Assisted Access (LAA) transmissions over the 5 GHz U-NII bands. Our DNN models include both convolutional neural networks (CNNs) and recurrent neural network (RNN) models, specifically long short-term memory (LSTM) and bidirectional LSTM (BiLSTM) networks. We demonstrate the high classification accuracy of these models under “benign” (non-adversarial) noise. We then study the efficacy of these classifiers under AML-based perturbations, using the fast gradient sign method (FGSM) to generate adversarial perturbations. Different attack scenarios are studied, depending on how much information the attacker has about the defender's classifier. In one extreme scenario, known as a “white-box” attack, the attacker has full knowledge of the defender's DNN, including its hyperparameters, its training dataset, and even the seeds used to train the network. This attack is shown to significantly degrade the classification accuracy even when the FGSM-based perturbations are low power, i.e., when the received SNR is relatively high. We then consider more realistic attack scenarios in which the attacker has partial or no knowledge of the defender's classifier. Even under limited knowledge, adversarial perturbations can still lead to a significant reduction in classification accuracy, relative to classification under AWGN at the same SNR level.
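As an illustrative sketch only (the paper does not publish its code), a BiLSTM classifier of the kind described in the abstract can be prototyped in a few lines of PyTorch. The layer sizes, the three-class output (Wi-Fi, NR-U, LAA), and the convention of feeding raw I/Q samples as length-2 feature vectors per time step are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Toy BiLSTM over sequences of I/Q samples, shape (batch, time, 2)."""

    def __init__(self, hidden_size=64, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size,
                            batch_first=True, bidirectional=True)
        # Two directions -> 2 * hidden_size features at each time step.
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, time, 2 * hidden_size)
        return self.fc(out[:, -1, :])  # classify from the final time step
```

Classifying from the final time step is one common design choice; pooling over all time steps is an equally plausible alternative.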
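The FGSM attack referenced above is a single gradient step: the adversarial input is x + epsilon * sign(grad_x L(f(x), y)). A minimal sketch follows, assuming a differentiable PyTorch classifier and labeled I/Q batches; the function name fgsm_perturb and the use of cross-entropy loss are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """One-step FGSM: return x + epsilon * sign(grad_x L(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each input element by +/- epsilon in the direction that
    # increases the classification loss; epsilon sets the perturbation power
    # (and hence the effective SNR regimes the abstract refers to).
    return (x + epsilon * x.grad.sign()).detach()
```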