Cross-domain learning for underwater image enhancement
- Publisher:
- Elsevier
- Publication Type:
- Journal Article
- Citation:
- Signal Processing: Image Communication, 2023, 110, pp. 116890
- Issue Date:
- 2023-01-01
Filename | Description | Size
---|---|---
2-s2.0-85141928593 AM.pdf | Accepted version | 24.79 MB
This item is currently unavailable due to the publisher's embargo.
The poor quality of underwater images is a well-known factor degrading the performance of underwater development projects, including mineral exploitation, diving photography, and navigation for autonomous underwater vehicles. In recent years, deep learning-based techniques have achieved remarkable success in image restoration and enhancement tasks. However, the limited availability of paired training data (underwater images and their corresponding clear images) and the need for vivid color correction remain challenging for underwater image enhancement, as almost all learning-based methods require paired data for training. In this study, instead of creating time-consuming paired data, we explore an unsupervised training strategy. Specifically, we introduce a universal cross-domain GAN-based framework that generates high-quality images without depending on paired training data. To ensure vivid colors, a color loss is designed to constrain the training process. In addition, a feature fusion module (FFM) is proposed to increase the capacity of the whole model, complementing the dual-discriminator design adopted in the architecture. Extensive quantitative and perceptual experiments show that our approach overcomes the limitation of paired data and outperforms the state-of-the-art on several underwater benchmarks in terms of both accuracy and model deployment.
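The abstract does not give the exact formulation of the color loss. A common choice in the enhancement literature is a cosine-similarity penalty on per-pixel RGB vectors, which punishes hue shifts while being insensitive to brightness; the sketch below illustrates that idea in NumPy and is an assumption, not the authors' definition.

```python
import numpy as np

def color_loss(enhanced: np.ndarray, reference: np.ndarray, eps: float = 1e-8) -> float:
    """Mean cosine distance between per-pixel RGB vectors.

    A color cast rotates the RGB vector of a pixel, so penalizing the
    angle between enhanced and reference pixels encourages faithful hue
    while ignoring pure intensity changes. Inputs are H x W x 3 float
    arrays in [0, 1]. This is an illustrative formulation only.
    """
    e = enhanced.reshape(-1, 3)
    r = reference.reshape(-1, 3)
    cos = np.sum(e * r, axis=1) / (
        np.linalg.norm(e, axis=1) * np.linalg.norm(r, axis=1) + eps
    )
    return float(np.mean(1.0 - cos))
```

In an unpaired setting such as the one described, `reference` would typically be replaced by a cycle-reconstructed or statistically matched target rather than a ground-truth clear image.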