Dynamic group optimisation algorithm for training feed-forward neural networks
- Publication Type: Journal Article
- Neurocomputing, 2018, 314, pp. 1-19
This item is closed access and not available.
Feed-forward neural networks are effective at solving many types of problems. However, finding efficient training algorithms for them remains challenging. The dynamic group optimisation (DGO) algorithm is a recently proposed hybrid swarm-evolutionary algorithm that exhibits a rapid convergence rate and performs well at exploring the search space and avoiding local optima. In this paper, we propose a new hybrid algorithm, FNNDGO, which integrates the DGO algorithm into a feed-forward neural network. DGO acts as the optimiser during training: it tunes the network's parameters towards their optimal values and configures the structure of the feed-forward neural network. We evaluated the proposed algorithm by comparing it with other training methods on two types of problems. The experimental results show that our proposed algorithm exhibits promising performance for solving real-world problems.
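The abstract describes using a population-based optimiser in place of gradient descent to train a feed-forward network's weights. The sketch below illustrates this general idea with a simple elitist evolutionary loop on the XOR task; it is a hypothetical stand-in, since the DGO update rules themselves are not given on this page, and the 2-2-1 network, population size, and mutation scheme are all assumptions for illustration.

```python
# Hypothetical sketch: training a small feed-forward network's weights with a
# generic population-based optimiser (a stand-in for DGO, whose update rules
# are not described here). Assumes a 2-2-1 sigmoid network and the XOR task.
import math
import random

X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
Y = [0.0, 1.0, 1.0, 0.0]
N_WEIGHTS = 9  # 2x2 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    # Hidden layer: two sigmoid units.
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    # Single sigmoid output unit.
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def mse(w):
    # Mean squared error of the network over the whole XOR dataset.
    return sum((forward(w, x) - y) ** 2 for x, y in zip(X, Y)) / len(X)

def train(pop_size=30, generations=300, sigma=0.5, seed=0):
    rng = random.Random(seed)
    # Each individual is one complete weight vector for the network.
    pop = [[rng.uniform(-1, 1) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half, refill by mutating elites
        # (a simple evolutionary step, not DGO itself).
        pop.sort(key=mse)
        elite = pop[: pop_size // 2]
        pop = elite + [
            [w + rng.gauss(0.0, sigma) for w in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return min(pop, key=mse)

best = train()
print(mse(best))
```

A constant predictor outputting 0.5 scores an MSE of 0.25 on XOR, so any solution below that threshold has learned something; the population search typically drops well under it. Real DGO would replace the mutate-the-elites step with its own group dynamics, but the outer loop (evaluate fitness, update the population, keep the best weight vector) has the same shape.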