Convergence acceleration for multiobjective sparse reconstruction via knowledge transfer
- Publication Type:
- Conference Proceeding
- Citation:
- Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019, vol. 11411 LNCS, pp. 475-487
- Issue Date:
- 2019-01-01
This item is open access.
© Springer Nature Switzerland AG 2019. Multiobjective sparse reconstruction (MOSR) methods can potentially obtain superior reconstruction performance. However, they suffer from high computational cost, especially in high-dimensional reconstruction. Furthermore, they are generally run independently without reusing prior knowledge from past experiences, leading to unnecessary computational consumption through the re-exploration of similar search spaces. To address these problems, we propose a sparse-constraint knowledge transfer operator that accelerates the convergence of MOSR solvers by reusing knowledge from past problem-solving experiences. First, we introduce a deep nonlinear feature coding method to extract the feature mapping between the search spaces of the current problem and a previously solved MOSR problem. Through this mapping, we learn a set of knowledge-induced solutions that encode the search experience of the past problem. Thereafter, we develop and apply a sparse-constraint strategy to refine these learned solutions and guarantee their sparse characteristics. Finally, we inject the refined solutions into the iteration of the current problem to facilitate convergence. To validate the efficiency of the proposed operator, comprehensive experiments are conducted on an extensive set of simulated signal reconstruction problems.
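The three steps summarized in the abstract (mapping past solutions into the current search space, sparse-constraint refinement, and injection into the current population) can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the linear map standing in for the learned deep nonlinear feature coding, and the random-replacement injection scheme are all hypothetical placeholders chosen for illustration.

```python
import numpy as np

def transfer_operator(source_solutions, mapping, population, sparsity_k):
    """Hypothetical sketch of a sparse-constraint knowledge transfer step.

    source_solutions : (m, d_src) array of solutions from a previously
                       solved MOSR problem (the past search experience).
    mapping          : callable mapping a source solution into the current
                       decision space; stands in here for the feature
                       mapping learned between the two problems.
    population       : (n, d_cur) array, current population of the solver.
    sparsity_k       : number of nonzero entries to keep after refinement.
    """
    # 1. Map past solutions into the current search space
    #    (knowledge-induced solutions).
    induced = np.array([mapping(s) for s in source_solutions])

    # 2. Sparse-constraint refinement: keep only the k largest-magnitude
    #    entries of each induced solution and zero out the rest.
    refined = np.zeros_like(induced)
    for i, x in enumerate(induced):
        keep = np.argsort(np.abs(x))[-sparsity_k:]
        refined[i, keep] = x[keep]

    # 3. Inject the refined solutions into the current iteration by
    #    replacing an equal number of randomly chosen individuals.
    idx = np.random.choice(len(population), size=len(refined), replace=False)
    population = population.copy()
    population[idx] = refined
    return population

# Toy usage: a random linear map stands in for the learned feature mapping.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
old_solutions = rng.standard_normal((5, 64))
current_pop = rng.standard_normal((30, 64))
new_pop = transfer_operator(old_solutions, lambda s: W @ s, current_pop, sparsity_k=8)
```

In the paper's setting, such a transferred-and-thresholded set would be evaluated and ranked together with the existing population by the MOSR solver; the sketch only shows the operator itself, under the stated assumptions.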