FTransCNN: Fusing Transformer and a CNN based on fuzzy logic for uncertain medical image segmentation

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Information Fusion, 2023, 99
Issue Date:
2023-11-01
The accurate segmentation of medical images plays a crucial role in diagnosing and treating diseases. Although many methods now use multimodal joint segmentation, jointly using segmentation features extracted by multiple models can introduce heterogeneity and uncertainty, and ill-designed fusion methods cannot exploit the advantages of multiple models, yielding poor segmentation performance. Therefore, this study proposes the FTransCNN model, which combines a convolutional neural network (CNN) and a Transformer and jointly exploits the features they extract through a new fuzzy fusion module. First, the CNN and the Transformer act as parallel backbone networks for feature extraction. Second, channel attention promotes the global key information in the Transformer features to improve their representational ability, while spatial attention enhances the local details of the CNN features and suppresses irrelevant regions. Third, the proposed model applies the Hadamard product to model fine-grained interactions between the two branches and uses the Choquet fuzzy integral to suppress heterogeneity and uncertainty in the fused features. Fourth, FTransCNN employs hierarchical upsampling through the fuzzy attention fusion module (FAFM) to effectively capture both low-level spatial features and high-level semantic context. Finally, the model obtains the final segmentation result by deconvolution, yielding improved segmentation. Experimental results on the Chest X-ray and Kvasir-SEG datasets show that FTransCNN performs better on segmentation tasks than state-of-the-art deep segmentation models.
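As a rough illustration of the fusion steps described in the abstract, the following PyTorch sketch combines channel attention on a Transformer branch, spatial attention on a CNN branch, a Hadamard-product interaction, and a two-input discrete Choquet integral. The module name, layer choices, and fuzzy measure values are assumptions made for illustration; they are not taken from the paper's actual implementation.

# Minimal sketch of the fuzzy fusion idea, assuming PyTorch. The name
# FuzzyFusion, the layer choices, and the fuzzy measure values
# (0.4 per branch, 1.0 for both) are hypothetical.
import torch
import torch.nn as nn

class FuzzyFusion(nn.Module):
    """Fuses Transformer and CNN feature maps of shape (B, C, H, W)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention for the Transformer branch (global key information).
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention for the CNN branch (local detail, suppressing
        # irrelevant regions); input is channel-wise mean and max maps.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    @staticmethod
    def choquet(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Discrete Choquet integral of two evidence maps under an assumed
        # fuzzy measure mu({1}) = mu({2}) = 0.4, mu({1, 2}) = 1.0:
        #   C = min(a, b) * mu({1, 2}) + (max(a, b) - min(a, b)) * mu({argmax})
        lo = torch.minimum(a, b)
        hi = torch.maximum(a, b)
        return lo + 0.4 * (hi - lo)

    def forward(self, t_feat: torch.Tensor, c_feat: torch.Tensor) -> torch.Tensor:
        t = t_feat * self.channel_att(t_feat)  # attended Transformer features
        s = torch.cat(
            [c_feat.mean(dim=1, keepdim=True), c_feat.amax(dim=1, keepdim=True)],
            dim=1,
        )
        c = c_feat * self.spatial_att(s)       # attended CNN features
        interaction = t * c                    # Hadamard-product interaction
        # Aggregate the attended branches with the Choquet integral and add
        # the fine-grained interaction term.
        return self.choquet(t, c) + interaction

fusion = FuzzyFusion(channels=64)
t = torch.randn(2, 64, 32, 32)  # Transformer branch features
c = torch.randn(2, 64, 32, 32)  # CNN branch features
out = fusion(t, c)              # -> torch.Size([2, 64, 32, 32])

In the full model this block would sit between the parallel encoders and the FAFM upsampling path; the sketch only demonstrates the fusion arithmetic.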