Full-Resolution Lung Nodule Localization From Chest X-Ray Images Using Residual Encoder-Decoder Networks

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Access, vol. 11, pp. 143016-143036, 2023
Issue Date:
2023-01-01
Abstract:
Lung cancer is the leading cause of cancer death, and early diagnosis is associated with a positive prognosis. Chest X-ray (CXR) provides an inexpensive imaging modality for lung cancer diagnosis. Computer vision algorithms have previously been proposed to assist human radiologists in this task; however, leading studies use down-sampled images and computationally expensive methods with unproven generalization. In contrast, this study localizes lung nodules from CXR images using efficient encoder-decoder neural networks crafted to process full-resolution input images, thereby avoiding the signal loss caused by down-sampling. Encoder-decoder networks are trained and tested on the Japanese Society of Radiological Technology dataset and are then used to localize lung nodules in an independent CXR dataset. These experiments determine the optimal network depth, image resolution, and pre-processing pipeline for generalized lung nodule localization. We find that more subtle nodules are detected in earlier training epochs; we therefore propose a novel self-ensemble model built from three consecutive epochs centered on the validation optimum. This ensemble achieves a sensitivity of 85% in 10-fold internal testing at 8 false positives per image, and a sensitivity of 81% at 6 false positives per image after morphological false positive reduction. These results are comparable to those of more computationally complex systems, yet the proposed method has a sub-second inference time that is faster than other methods reported in the literature. The algorithm also generalizes well to a challenging external dataset, achieving a sensitivity of 77% at 7.6 false positives per image.
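The abstract describes two concrete algorithmic components: a self-ensemble that averages predictions from three consecutive training epochs centered on the validation optimum, and a morphological step that suppresses false positives. The following is a minimal sketch of how such a pipeline might be assembled with PyTorch and SciPy; the model factory, checkpoint paths, probability threshold, and minimum-area criterion are illustrative assumptions, not the authors' published implementation.

    import numpy as np
    import torch
    from scipy import ndimage

    def ensemble_probability(model_factory, checkpoint_paths, image):
        # Average sigmoid probability maps from the three saved epochs
        # (e.g. the epochs immediately before, at, and after the validation optimum).
        probs = []
        for path in checkpoint_paths:
            model = model_factory()  # hypothetical factory for the encoder-decoder network
            model.load_state_dict(torch.load(path, map_location="cpu"))
            model.eval()
            with torch.no_grad():
                logits = model(image.unsqueeze(0))  # full-resolution CXR tensor of shape (C, H, W)
                probs.append(torch.sigmoid(logits)[0, 0].numpy())
        return np.mean(probs, axis=0)

    def reduce_false_positives(prob_map, threshold=0.5, min_area=30):
        # Threshold the averaged map and drop small connected components,
        # a simple stand-in for morphological false positive reduction.
        mask = prob_map >= threshold
        labels, n_components = ndimage.label(mask)
        keep = np.zeros_like(mask)
        for label_id in range(1, n_components + 1):
            component = labels == label_id
            if component.sum() >= min_area:  # area criterion is an assumed value
                keep |= component
        return keep

In use, ensemble_probability would be called with the three checkpoints surrounding the best validation epoch, and reduce_false_positives applied to its output to obtain candidate nodule regions.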