Infrared and visible image fusion via detail preserving adversarial learning

Publisher:
Elsevier BV
Publication Type:
Journal Article
Citation:
Information Fusion, 2020, 54, pp. 85-98
Issue Date:
2020-02-01
File:
1-s2.0-S1566253519300314-main.pdf (Published version, Adobe PDF, 4.46 MB)
© 2019 Elsevier B.V.
Abstract:
Targets can be detected easily against the background of infrared images due to their significantly discriminative thermal radiation, while visible images contain textural details at high spatial resolution that are beneficial to target recognition. Therefore, fused images with abundant detail information and effective target areas are desirable. In this paper, we propose an end-to-end model for infrared and visible image fusion based on detail preserving adversarial learning. It overcomes the limitations of the manual and complicated design of activity-level measurement and fusion rules in traditional fusion methods. Considering the specific information of infrared and visible images, we design two loss functions, a detail loss and a target edge-enhancement loss, to improve the quality of detail information and sharpen the edges of infrared targets under the framework of a generative adversarial network. Our approach enables the fused image to simultaneously retain the thermal radiation and sharpened target boundaries of the infrared image and the abundant textural details of the visible image. Experiments conducted on publicly available datasets demonstrate the superiority of our strategy over state-of-the-art methods in both objective metrics and visual impressions. In particular, our results look like enhanced infrared images with clearly highlighted, edge-sharpened targets as well as abundant detail information.
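The abstract names two loss terms but gives no formulas. The sketch below is a minimal, illustrative NumPy interpretation only, assuming the detail loss penalizes gradient differences between the fused and visible images and the target edge-enhancement loss penalizes gradient differences against the infrared image inside a binary target mask; the function names, the forward-difference gradient operator, and the mask-based formulation are assumptions, not the paper's definitions.

```python
import numpy as np

def gradients(img):
    # Forward differences in x and y, zero-padded to keep the input shape.
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

def detail_loss(fused, visible):
    # Assumed form: mean squared difference between the gradient fields
    # of the fused image and the visible image (preserves textural detail).
    fx, fy = gradients(fused)
    vx, vy = gradients(visible)
    return float(np.mean((fx - vx) ** 2 + (fy - vy) ** 2))

def target_edge_loss(fused, infrared, mask):
    # Assumed form: gradient mismatch against the infrared image,
    # averaged only over the (hypothetical) binary target mask,
    # encouraging sharp infrared target boundaries in the fused result.
    fx, fy = gradients(fused)
    ix, iy = gradients(infrared)
    m = mask.astype(float)
    denom = max(m.sum(), 1.0)
    return float((m * ((fx - ix) ** 2 + (fy - iy) ** 2)).sum() / denom)
```

In a GAN setup these terms would be added, with weighting coefficients, to the generator's adversarial loss; a fused image identical to the visible image drives the detail loss to zero, and matching infrared gradients inside the mask drives the edge term to zero.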