Volume 33, Issue 7 e4503
RESEARCH ARTICLE

A hybrid attention mechanism for blind automatic modulation classification

Fan Jia (Corresponding Author)
School of Electronic Information Engineering, Beijing Jiaotong University, Beijing, China

Correspondence: Fan Jia, School of Electronic Information Engineering, Beijing Jiaotong University, Beijing, China. Email: [email protected]

Yueyi Yang
School of Electronic Information Engineering, Beijing Jiaotong University, Beijing, China

Junyi Zhang
The 54th Research Institute of CETC, Shijiazhuang, China

Yong Yang
The 54th Research Institute of CETC, Shijiazhuang, China
First published: 05 April 2022

Funding information: The Foundation of the Hebei Key Laboratory of Electromagnetic Spectrum Cognition and Control; The Science Foundation of the Ministry of Education (MOE) of China and China Mobile Communications Corporation, Grant/Award Number: MCM20200106

Abstract

Recently, deep learning has been making great progress in automatic modulation classification, mirroring its success in computer vision. However, radio signals with harsh impairments (oscillator drift, clock drift, noise) significantly degrade the performance of existing classifiers. To overcome this problem and explore its underlying cause, a hybrid attention convolution network is proposed to enhance the capability of feature extraction. First, a spatial transformer network module with long short-term memory is introduced to synchronize and normalize radio signals. Second, a channel attention module is constructed to weight and assemble feature maps, producing global feature representations with more context-relevant information. By combining these two modules, a relatively lightweight classifier with a complex convolution layer for final classification is further investigated through visualization. Moreover, different structures of the attention module are compared and optimized in detail. Experimental results show that the proposed hybrid model achieves the best performance among all compared models when the SNR is above −7 dB; accuracy peaks at 93.448% at 0 dB, 2.7% higher than that of CLDNN, and reaches 97.560% at 20 dB, 8.2% higher than that of ResNet. Furthermore, the model can be made more efficient through a trade-off between accuracy and model size.
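To make the architecture described above concrete, the following is a minimal, hypothetical PyTorch sketch of the pipeline the abstract outlines: an LSTM-driven spatial transformer block that applies an affine correction to raw I/Q samples, a squeeze-and-excitation-style channel attention block that re-weights feature maps, and a lightweight convolutional head. Input shape (batch, 2, 128), all module names, layer sizes, and the use of an ordinary (rather than complex-valued) convolution are illustrative assumptions, not the authors' exact implementation.

# Hypothetical sketch of the hybrid attention classifier described in the abstract.
# Assumes RadioML-style I/Q input of shape (batch, 2, 128); all names and
# hyperparameters are illustrative, not the authors' published configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class STNLSTMBlock(nn.Module):
    """Predicts an affine correction from LSTM features and resamples the I/Q grid."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 6)                    # 2x3 affine matrix
        self.fc.weight.data.zero_()                       # start from the identity transform
        self.fc.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):                                 # x: (B, 2, L)
        h, _ = self.lstm(x.transpose(1, 2))               # (B, L, hidden)
        theta = self.fc(h[:, -1]).view(-1, 2, 3)          # (B, 2, 3)
        grid = F.affine_grid(theta, [x.size(0), 1, 2, x.size(2)], align_corners=False)
        x4d = x.unsqueeze(1)                              # treat (2, L) I/Q block as an image
        return F.grid_sample(x4d, grid, align_corners=False).squeeze(1)

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style re-weighting of convolutional feature maps."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, f):                                 # f: (B, C, L)
        w = self.fc(f.mean(dim=-1))                       # global average pool -> (B, C)
        return f * w.unsqueeze(-1)                        # channel-wise weighting

class HybridAttentionNet(nn.Module):
    def __init__(self, num_classes=11):
        super().__init__()
        self.stn = STNLSTMBlock()
        self.conv = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU())
        self.attn = ChannelAttention(64)
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, x):                                 # x: (B, 2, L) raw I/Q samples
        x = self.stn(x)                                   # synchronize / normalize the signal
        f = self.attn(self.conv(x))                       # weight and assemble feature maps
        return self.head(f)

# Usage: logits for a batch of 8 synthetic signals of length 128.
logits = HybridAttentionNet()(torch.randn(8, 2, 128))     # shape (8, 11)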

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available from the corresponding author upon reasonable request.
