Automatic Machine Learning for Multi-Receiver CNN Technology Classifiers

Publisher:
Association for Computing Machinery (ACM)
Publication Type:
Conference Proceeding
Citation:
WiseML 2022 - Proceedings of the 2022 ACM Workshop on Wireless Security and Machine Learning, 2022, pp. 39-44
Issue Date:
2022-05-19
Convolutional Neural Networks (CNNs) are among the most studied families of deep learning models for signal classification, including modulation and technology classification, detection, and identification. In this work, we focus on technology classification based on raw I/Q samples collected from multiple synchronized receivers. As an example use case, we study protocol identification of Wi-Fi, LTE-LAA, and 5G NR-U technologies that coexist over the 5 GHz Unlicensed National Information Infrastructure (U-NII) bands. Designing and training accurate CNN classifiers involves significant time and effort, much of which goes to fine-tuning a model's architectural settings (e.g., the number of convolutional layers and their filter sizes) and determining appropriate hyperparameter configurations, such as learning rate and batch size. We tackle the former by defining the architectural settings themselves as hyperparameters. We attempt to automatically optimize these architectural parameters, along with other preprocessing hyperparameters (e.g., the number of I/Q samples within each classifier input) and learning hyperparameters, by formulating a Hyperparameter Optimization (HyperOpt) problem, which we solve in a near-optimal fashion using the Hyperband algorithm. The resulting near-optimal CNN (OCNN) classifier is then used to study classification accuracy for over-the-air (OTA) as well as simulated datasets, considering various signal-to-noise ratio (SNR) values. We show that using a larger number of receivers to construct multi-channel CNN inputs does not necessarily improve classification accuracy; instead, this number should itself be treated as a preprocessing hyperparameter to be optimized via Hyperband. OTA results reveal that our OCNN classifiers improve classification accuracy by 24.58% compared to manually tuned CNNs.
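To make the search procedure concrete, the following is a minimal, self-contained sketch of the Hyperband algorithm applied to a CNN search space of the kind described above. The `hyperband` function and the `sample_cnn_config` search space (layer count, filter size, receiver count, input length, learning rate, batch size) are illustrative assumptions, not the paper's actual implementation; in the real pipeline, `evaluate` would train a candidate CNN for the given budget (e.g., epochs) and return validation accuracy.

```python
import math
import random

def hyperband(sample_config, evaluate, max_resource=81, eta=3, seed=0):
    """Sketch of Hyperband: run several successive-halving brackets that
    trade off the number of sampled configurations against the training
    budget allotted to each, and return the best (score, config) found."""
    rng = random.Random(seed)
    s_max = int(math.log(max_resource, eta))
    best = (float("-inf"), None)                       # (score, config)
    for s in range(s_max, -1, -1):                     # one bracket per s
        n = math.ceil((s_max + 1) * eta**s / (s + 1))  # configs in bracket
        r = max_resource * eta**(-s)                   # budget per config
        configs = [sample_config(rng) for _ in range(n)]
        for i in range(s + 1):                         # successive halving
            n_i = math.floor(n * eta**(-i))
            r_i = r * eta**i
            scored = sorted(((evaluate(c, r_i), c) for c in configs),
                            key=lambda t: t[0], reverse=True)
            best = max(best, scored[0], key=lambda t: t[0])
            configs = [c for _, c in scored[:max(1, n_i // eta)]]
    return best

# Hypothetical search space mirroring the paper's idea: architectural and
# preprocessing choices treated as hyperparameters alongside learning ones.
def sample_cnn_config(rng):
    return {
        "conv_layers": rng.randint(1, 4),
        "filter_size": rng.choice([3, 5, 7]),
        "n_receivers": rng.choice([1, 2, 4]),      # multi-channel input depth
        "input_len": rng.choice([128, 256, 512]),  # I/Q samples per input
        "lr": rng.choice([1e-2, 1e-3, 1e-4]),
        "batch_size": rng.choice([32, 64, 128]),
    }
```

Note that the number of receivers appears in the search space like any other hyperparameter, reflecting the finding that more receiver channels are not automatically better.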
We also study the effect of min-max normalization of the I/Q samples within each classifier input on generalization accuracy over simulated datasets with SNRs other than the training set's SNR, and show an average improvement of 108.05% when I/Q samples are normalized.
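One plausible reading of this normalization step is sketched below: each classifier input window is scaled to [0, 1] by its own minimum and maximum, so absolute amplitude (and hence SNR-dependent power) is removed before the CNN sees the data. The exact normalization axes are an assumption here, not taken from the paper.

```python
import numpy as np

def minmax_normalize(batch):
    """Min-max normalize each classifier input independently to [0, 1].

    `batch` is assumed to have shape (n_inputs, n_channels, n_samples),
    where channels hold I/Q streams from one or more receivers. Each input
    window is scaled by its own min/max, discarding absolute power so that
    inputs from different SNRs look alike to the classifier.
    """
    flat = batch.reshape(batch.shape[0], -1)
    lo = flat.min(axis=1).reshape(-1, 1, 1)
    hi = flat.max(axis=1).reshape(-1, 1, 1)
    # Guard against zero dynamic range (constant windows).
    return (batch - lo) / np.maximum(hi - lo, 1e-12)
```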