HyBNN: Quantifying and Optimizing Hardware Efficiency of Binary Neural Networks
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- 2023 IEEE 31st Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), 2023, 00, pp. 203
- Issue Date:
- 2023-07-10
Closed Access
| Filename | Description | Size |
|---|---|---|
| 1723280.pdf | Published version | 72.29 kB |
This item is closed access and not available.
Binary neural network (BNN) has recently presented a promising opportunity for deep learning inference on resource-constrained edge devices. Using extreme data precision, i.e., 1-bit weights and 1-bit activations, BNN not only significantly reduces the network memory footprint but also trades massive multiply-accumulate operations for much cheaper logical XNOR and population-count operations. However, our investigation reveals that to achieve satisfactory accuracy gains, state-of-the-art (SOTA) BNNs such as FracBNN [4] and ReActNet [1] usually have to incorporate various auxiliary floating-point (AFP) components and increase the model size, which in turn degrades hardware performance efficiency.
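The XNOR-and-popcount substitution mentioned in the abstract can be sketched as follows. This is a minimal illustrative example, not code from the paper: with {-1, +1} values packed as bits (1 encodes +1), a binary dot product of n elements equals 2 * popcount(XNOR(a, b)) - n, since XNOR marks the positions where the two vectors agree. The function and variable names here are assumptions for illustration.

```python
# Sketch: replacing multiply-accumulate with XNOR + popcount for
# binarized vectors, as described in the abstract. Names and the
# 8-bit width are illustrative assumptions, not from the paper.

N = 8  # number of binary elements packed into one word

def pack(bits):
    """Pack a list of {-1, +1} values into an integer (1 encodes +1)."""
    word = 0
    for b in bits:
        word = (word << 1) | (1 if b == +1 else 0)
    return word

def bin_dot(wa, wb, n=N):
    """Dot product of two {-1, +1} vectors from their packed words:
    2 * popcount(XNOR) - n."""
    xnor = ~(wa ^ wb) & ((1 << n) - 1)  # 1 where the bits agree
    return 2 * bin(xnor).count("1") - n

# Check against the naive multiply-accumulate version.
a = [+1, -1, +1, +1, -1, -1, +1, -1]
b = [+1, +1, -1, +1, -1, +1, +1, -1]
assert bin_dot(pack(a), pack(b)) == sum(x * y for x, y in zip(a, b))
```

On hardware, the XNOR and popcount map to a handful of logic gates per bit, which is why BNNs are attractive for FPGA and edge deployment compared with full-precision multiply-accumulate units.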