AB - © 2017 IEEE. In this paper, we present a novel and general network structure for accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet has less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL); the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps remain zero after element-wise multiplication, so it is safe to skip the calculation of the corresponding high-cost convolution in the original convolutional layer; 2) the LCCL is very fast if it is implemented as a 1 × 1 convolution or as a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSVRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32% on average with negligible performance drop.
AU - Dong, X
AU - Huang, J
AU - Yang, Y
AU - Yan, S
DA - 2017/11/06
DO - 10.1109/CVPR.2017.205
EP - 1903
JO - Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
PY - 2017/11/06
SP - 1895
TI - More is less: A more complicated network with less inference complexity
VL - 2017-January
Y1 - 2017/11/06
Y2 - 2022/10/01
ER -