A one-layer recurrent neural network for constrained nonsmooth invex optimization

Publication Type:
Journal Article
Neural Networks, 2014, vol. 50, pp. 79–89
Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network, designed using an exact penalty function method, is proposed for solving constrained nonsmooth invex optimization problems. It is proved that, for a sufficiently large penalty parameter, every state of the proposed neural network converges globally to the optimal solution set of the constrained invex optimization problem. In addition, every state converges globally to the unique optimal solution when the objective and constraint functions are pseudoconvex. Moreover, every state reaches the feasible region in finite time and remains there thereafter. Lower bounds on the penalty parameter and on the convergence time are also estimated. Two numerical examples illustrate the performance of the proposed neural network. © 2013 Elsevier Ltd.
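The exact penalty idea summarized above can be illustrated with a minimal sketch (not the paper's model): a subgradient flow dx/dt ∈ -∂[f(x) + σ max(0, g(x))], integrated with forward Euler. The test problem, step size, and penalty parameter σ below are illustrative assumptions chosen for this sketch, not values from the paper.

```python
# Hedged sketch of an exact-penalty subgradient flow.
# Problem (assumed for illustration):
#   minimize  f(x) = |x1 - 2| + |x2 - 1|     (nonsmooth, convex, hence invex)
#   subject to g(x) = x1 + x2 - 1 <= 0.
# The unconstrained minimizer (2, 1) is infeasible; the constrained optimal
# value is 2, attained on the boundary segment {(t, 1 - t) : 0 <= t <= 2}.

def sgn(v):
    """A subgradient of |.|: sign, with 0 chosen at the kink."""
    return (v > 0) - (v < 0)

def f(x):
    return abs(x[0] - 2) + abs(x[1] - 1)

def g(x):
    return x[0] + x[1] - 1

def penalty_flow(x, sigma=5.0, h=1e-3, steps=4000):
    """Forward-Euler integration of dx/dt = -subgrad of f(x) + sigma*max(0, g(x))."""
    for _ in range(steps):
        d = [sgn(x[0] - 2), sgn(x[1] - 1)]   # subgradient of f
        if g(x) > 0:                          # add subgradient of sigma * max(0, g)
            d[0] += sigma
            d[1] += sigma
        x = [x[0] - h * d[0], x[1] - h * d[1]]
    return x

# Start at the infeasible unconstrained minimizer; the penalty term drives the
# state into the feasible region in finite time, then it slides along the
# constraint boundary near the optimal set (here, around (1, 0)).
x = penalty_flow([2.0, 1.0])
```

With a large enough σ (here any σ > 1 suffices for this problem), the flow first reaches the feasible region and then chatters in a small neighborhood of the boundary, with f(x) approaching the optimal value 2; this mirrors the finite-time feasibility and global convergence properties stated in the abstract.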