Stable and compact design of Memristive GoogLeNet Neural Network

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Neurocomputing, 2021, 441, pp. 52-63
Issue Date:
2021-06-21
Abstract:
To meet the requirements of edge intelligence for circuit area, power consumption, and computing performance, a Memristive GoogLeNet Neural Network (MGNN) circuit is designed using the memristor, an emerging device that integrates storage and computing, as its basic circuit element. The circuit adopts 1×1 convolution and multi-scale convolutional feature fusion to reduce the number of layers the network requires while preserving recognition accuracy. To reduce the size of the memristor crossbars, we design word-line and bit-line pruning methods for the Memristive Convolution (MC) layers, and we further exploit the parameter distribution of the memristive neural network to shrink the crossbars. Based on mathematical analysis, the Memristive Batch Normalization (MBN) layer and Memristive Dropout (MD) are merged into the preceding MC layers, cutting the number of network layers and decreasing the circuit's power consumption. We also design channel-optimization and layer-optimization methods for the MC layers, which greatly reduce the accuracy loss caused by the memristors' limited multi-state conductance, improve the stability of the circuit, and lower its area and power consumption. Experiments show that the circuit achieves 89.83% accuracy on the CIFAR-10 dataset with a per-neuron power consumption of only 1.3 μW. Even with only 2^4 = 16 memristor conductance states, the MGNN circuit still attains accuracy close to that of the floating-point MGNN.
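As a rough software-level illustration of the word-line/bit-line pruning idea (our own sketch, not the paper's circuit-level method): when a layer's weight matrix is mapped onto a crossbar, any row (word line) or column (bit line) whose weights are all negligible can be removed, shrinking the crossbar. The helper prune_crossbar and the threshold value are hypothetical.

    import numpy as np

    def prune_crossbar(W, threshold=1e-3):
        # Keep a word line (row) or bit line (column) only if it holds
        # at least one weight above the magnitude threshold.
        keep_rows = np.any(np.abs(W) >= threshold, axis=1)
        keep_cols = np.any(np.abs(W) >= threshold, axis=0)
        return W[np.ix_(keep_rows, keep_cols)], keep_rows, keep_cols

    # Toy usage: zero out some word lines, then prune them away.
    W = np.random.randn(128, 64)
    W[np.random.rand(128) < 0.2, :] = 0.0   # simulate prunable word lines
    W_small, rows, cols = prune_crossbar(W)
    print(W.shape, "->", W_small.shape)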
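The merge of batch normalization into a preceding convolution rests on a standard algebraic identity; the following sketch shows that folding in software (the function name fold_bn_into_conv is hypothetical, and this is not the authors' MBN circuit). The inference-time dropout merge mentioned in the abstract is analogous, amounting to scaling the weights by the keep probability.

    import numpy as np

    def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
        # BN(conv(x)) = gamma * (W*x + b - mean) / sqrt(var + eps) + beta
        # becomes a single convolution W'*x + b' with:
        #   W' = W * gamma / sqrt(var + eps)   (per output channel)
        #   b' = (b - mean) * gamma / sqrt(var + eps) + beta
        scale = gamma / np.sqrt(var + eps)
        W_folded = W * scale[:, None, None, None]  # W: (out_ch, in_ch, kH, kW)
        b_folded = (b - mean) * scale + beta
        return W_folded, b_folded

    # Toy check: 4 output channels, 3 input channels, 3x3 kernels
    W = np.random.randn(4, 3, 3, 3)
    b = np.zeros(4)
    gamma, beta = np.ones(4), np.zeros(4)
    mean = np.random.randn(4)
    var = np.abs(np.random.randn(4)) + 0.1
    W_f, b_f = fold_bn_into_conv(W, b, gamma, beta, mean, var)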
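Finally, the effect of limited multi-state conductance can be approximated in software by uniformly quantizing trained weights to 2^n levels. The sketch below (with the hypothetical helper quantize_to_levels) illustrates the 2^4 = 16-level case reported in the abstract; the paper's channel- and layer-optimization methods for coping with this quantization are not reproduced here.

    import numpy as np

    def quantize_to_levels(W, n_bits=4):
        # Map each weight to the nearest of 2**n_bits evenly spaced
        # conductance states spanning the weight range.
        levels = 2 ** n_bits                   # 2**4 = 16 states
        w_min, w_max = W.min(), W.max()
        step = (w_max - w_min) / (levels - 1)
        q = np.round((W - w_min) / step)
        return w_min + q * step

    W = np.random.randn(64, 64)
    W_q = quantize_to_levels(W)                # 16-level version of W
    assert len(np.unique(W_q)) <= 16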