Dynamic group optimisation algorithm for training feed-forward neural networks
- Publisher: Elsevier
- Publication Type: Journal Article
- Citation: Neurocomputing, 2018, 314, pp. 1-19
- Issue Date: 2018-11-07
| Filename | Description | Size |
|---|---|---|
| 1-s2.0-S0925231218304193-main.pdf | Published version | 4.48 MB |
This item is closed access and not available.
Feed-forward neural networks are effective at solving many types of problems. However, finding efficient training algorithms for them remains challenging. The dynamic group optimisation (DGO) algorithm is a recently proposed hybrid of swarm-based and evolutionary search, which converges rapidly and shows good global search ability while avoiding local optima. In this paper, we propose a new hybrid algorithm, FNNDGO, which integrates the DGO algorithm into the training of feed-forward neural networks. DGO acts as the optimiser during training, tuning the network parameters towards optimal values and configuring the structure of the feed-forward network. The performance of the proposed algorithm was evaluated by comparing it with other training methods on two types of problems. The experimental results show that the proposed algorithm exhibits promising performance on real-world problems.
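The abstract only sketches the scheme at a high level: a population-based optimiser searches over the network's weights instead of gradient descent. The sketch below illustrates that general idea in Python; it is not the authors' DGO algorithm (whose group-based operators are not described here), and the mutation/selection step, function names, and hyper-parameters are illustrative assumptions.

```python
# Minimal sketch, assuming the high-level scheme from the abstract: optimise the
# flattened weight vector of a small feed-forward network with a generic
# population-based search (a stand-in for DGO, not the actual DGO operators).
import numpy as np

def init_params(rng, sizes):
    """Random weights and biases for a fully connected network."""
    return [(rng.standard_normal((a, b)) * 0.5, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def flatten(params):
    return np.concatenate([w.ravel() for w, _ in params] +
                          [b.ravel() for _, b in params])

def unflatten(vec, sizes):
    shapes = list(zip(sizes[:-1], sizes[1:]))
    ws, bs, i = [], [], 0
    for a, b in shapes:
        ws.append(vec[i:i + a * b].reshape(a, b)); i += a * b
    for _, b in shapes:
        bs.append(vec[i:i + b]); i += b
    return list(zip(ws, bs))

def forward(params, x):
    for w, b in params[:-1]:
        x = np.tanh(x @ w + b)        # tanh hidden layers
    w, b = params[-1]
    return x @ w + b                  # linear output layer

def mse(vec, sizes, x, y):
    pred = forward(unflatten(vec, sizes), x)
    return float(np.mean((pred - y) ** 2))

def population_train(x, y, sizes, pop_size=30, iters=200, seed=0):
    """Population-based search over the weight vector (illustrative placeholder)."""
    rng = np.random.default_rng(seed)
    dim = flatten(init_params(rng, sizes)).size
    pop = rng.standard_normal((pop_size, dim))
    fitness = np.array([mse(v, sizes, x, y) for v in pop])
    for _ in range(iters):
        best = pop[fitness.argmin()]
        # Move each candidate toward the current best with a random perturbation,
        # keeping a trial only if it improves the training error.
        trial = pop + 0.5 * (best - pop) + 0.1 * rng.standard_normal(pop.shape)
        trial_fit = np.array([mse(v, sizes, x, y) for v in trial])
        improved = trial_fit < fitness
        pop[improved], fitness[improved] = trial[improved], trial_fit[improved]
    return unflatten(pop[fitness.argmin()], sizes), fitness.min()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, (200, 2))
    y = np.sin(x[:, :1]) + x[:, 1:] ** 2      # toy regression target
    params, loss = population_train(x, y, sizes=[2, 8, 1])
    print(f"final training MSE: {loss:.4f}")
```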