Black-Box Optimizer with Stochastic Implicit Natural Gradient
- Publisher:
- Springer
- Publication Type:
- Conference Proceeding
- Citation:
- Machine Learning and Knowledge Discovery in Databases. Research Track, 2021, 12977, pp. 217-232
- Issue Date:
- 2021-01-01
Closed Access
Filename | Description | Size
---|---|---
Lyu-Tsang2021_Chapter_Black-BoxOptimizerWithStochast.pdf | Published version | 1.59 MB
This item is closed access and not available.
Black-box optimization is critically important for many computationally intensive applications, including reinforcement learning (RL) and robot control. This paper presents a novel theoretical framework for black-box optimization in which the method performs stochastic updates with an implicit natural gradient of an exponential-family distribution. Theoretically, we prove the convergence rate of our framework with full-matrix update for convex functions under a Gaussian distribution. Our methods are very simple and contain fewer hyper-parameters than CMA-ES [12]. Empirically, our method with full-matrix update achieves performance competitive with the state-of-the-art method CMA-ES on benchmark test problems. Moreover, our methods can achieve high optimization precision on some challenging test functions (e.g., the l1-norm ellipsoid test problem and the Levy test problem), where methods with an explicit natural gradient, i.e., IGO [21] with full-matrix update, cannot. This demonstrates the efficiency of our methods.
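To illustrate the general setting the abstract describes, the sketch below shows a minimal Gaussian-search black-box optimizer: it samples candidates from a search distribution and moves the distribution toward better samples using only function evaluations, no gradients of the objective. This is a plain evolution-strategy step for illustration only, not the paper's implicit natural-gradient (INGO) update; the objective `sphere` and all parameter values are illustrative assumptions.

```python
import random

def sphere(x):
    # Convex test objective f(x) = sum(x_i^2), minimized at the origin.
    return sum(v * v for v in x)

def gaussian_search(f, mu, sigma, pop=20, lr=0.1, iters=300, seed=0):
    """Toy black-box optimizer over a Gaussian search distribution.

    Samples `pop` candidates from N(mu, sigma^2 I), then nudges the mean
    toward the centroid of the best quarter. A simplified stand-in for
    distribution-based methods; NOT the paper's INGO update rule.
    """
    rng = random.Random(seed)
    d = len(mu)
    for _ in range(iters):
        samples = [[rng.gauss(m, sigma) for m in mu] for _ in range(pop)]
        samples.sort(key=f)                      # rank by objective value only
        elite = samples[: pop // 4]              # keep the best quarter
        centroid = [sum(s[i] for s in elite) / len(elite) for i in range(d)]
        mu = [(1 - lr) * m + lr * c for m, c in zip(mu, centroid)]
        sigma *= 0.99                            # slowly shrink the search scale
    return mu

best = gaussian_search(sphere, mu=[3.0, -2.0], sigma=1.0)
print(best, sphere(best))
```

The key property shared with the paper's setting is that only black-box evaluations of `f` are used; the paper's contribution is replacing this heuristic mean shift with a stochastic implicit natural-gradient update on the distribution parameters, with a proven convergence rate for convex objectives.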