Orthogonal Super Greedy Learning for Sparse Feedforward Neural Networks
- Publisher:
- IEEE COMPUTER SOC
- Publication Type:
- Journal Article
- Citation:
- IEEE Transactions on Network Science and Engineering, 2022, 9, (1), pp. 161-170
- Issue Date:
- 2022-01-01
Closed Access
Filename | Description | Size
---|---|---
Orthogonal Super Greedy Learning for Sparse Feedforward Neural Networks.pdf | Published version | 1.3 MB
This item is closed access and not available.
Analytic approaches for feedforward neural networks, e.g., the Radial Basis Function (RBF) network, have attractive characteristics such as superior theoretical properties and fast numerical implementations. However, they still have several drawbacks. The primary defect is that their generalization performance and computational complexity are susceptible to the influence of irrelevant hidden variables. How to alleviate this influence has therefore become a crucial issue for feedforward neural networks. In this paper, we propose an Orthogonal Super Greedy Learning (OSGL) method for hidden-neuron selection. At each iteration, OSGL greedily selects more than one hidden neuron from a given network structure, continuing until an adequately sparse network has been constructed. Theoretical analyses show that it can reach the optimal learning rate. Extensive empirical results demonstrate its superiority: the proposed method produces excellent generalization performance with a sparse and compact feature representation within feedforward networks.
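The abstract describes an orthogonal-super-greedy selection loop: repeatedly pick several hidden neurons most correlated with the current residual, then refit the output weights by least squares. A minimal sketch of that generic pattern (batch orthogonal matching pursuit over a matrix `H` of hidden-neuron activations) is given below; the function name `osgl_select` and the parameters `step` and `max_neurons` are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def osgl_select(H, y, step=2, max_neurons=10):
    """Sketch of orthogonal super greedy selection (assumed interface).

    H : (n_samples, n_neurons) matrix of hidden-neuron activations.
    y : (n_samples,) target vector.
    Each round picks `step` neurons whose activation columns correlate
    most with the current residual, then refits the output weights on
    the active set by least squares (the "orthogonal" update).
    """
    active = []
    residual = y.copy()
    w = np.zeros(0)
    while len(active) < max_neurons:
        corr = np.abs(H.T @ residual)      # correlation with residual
        corr[active] = -np.inf             # exclude already-chosen neurons
        picks = np.argsort(corr)[-step:]   # "super greedy": several at once
        active.extend(picks.tolist())
        # refit output weights on the active set by least squares
        w, *_ = np.linalg.lstsq(H[:, active], y, rcond=None)
        residual = y - H[:, active] @ w    # orthogonal residual update
    return active, w
```

Selecting `step > 1` neurons per round is what distinguishes this from plain orthogonal greedy (OMP-style) selection, trading per-pick optimality for fewer least-squares refits.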