Hyperspectral Image Analysis: Learning from Multiple Modalities

Publication Type:
Thesis
Issue Date:
2025
This thesis advances hyperspectral imaging (HSI) analysis by addressing key challenges in dimensionality reduction, cross-scene knowledge transfer, and multimodal fusion. HSI provides rich spectral information but suffers from high dimensionality, a lack of altitude cues, limited labeled data, and difficulty in transferring knowledge across heterogeneous scenes. To overcome these issues, three novel models are proposed.

First, the Cross-Scene Knowledge Integration (CKI) model aligns spectral characteristics across scenes within a low-dimensional, domain-agnostic space, using a Source Similarity Mechanism to weight source relevance and a Complementary Information Integration module to distill residual scene-specific cues, enabling efficient label-sparse transfer.

Second, the Agreement–Disagreement Guided Knowledge Transfer (ADGKT) model enhances optimization stability by coordinating gradient directions through an agreement branch while preserving target-specific diversity via a disagreement branch and ensemble regularization.

Third, the Interaction Fusion (IF) model integrates LiDAR with center-patch HSI to address the absence of altitude information. A learnable fusion matrix adaptively weights intra- and inter-modal interactions, revealing subtle spatial–spectral structures.

Overall, this research contributes a unified framework that enables robust, interpretable, and label-efficient cross-scene and multimodal learning for hyperspectral imaging applications.
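To make the fusion-matrix idea concrete, the following is a minimal NumPy sketch of how a learnable matrix could weight intra- and inter-modal interactions between an HSI center-patch embedding and a LiDAR embedding. All variable names, dimensions, and the outer-product form of the interactions are illustrative assumptions, not the thesis's actual implementation; in training, the matrix `W` would be optimized rather than random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel feature vectors (names and sizes are assumptions):
# h = HSI center-patch embedding, l = LiDAR embedding, both projected to length d.
d = 8
h = rng.standard_normal(d)
l = rng.standard_normal(d)

# Stack the two modalities; pairwise element-wise products give four
# interaction terms: HSI-HSI, HSI-LiDAR, LiDAR-HSI, LiDAR-LiDAR.
z = np.stack([h, l])                          # shape (2, d)
interactions = np.einsum('id,jd->ijd', z, z)  # shape (2, 2, d)

# A learnable 2x2 fusion matrix W adaptively weights intra-modal entries
# (diagonal) and inter-modal entries (off-diagonal); here it is random.
W = rng.standard_normal((2, 2))
fused = np.einsum('ij,ijd->d', W, interactions)  # weighted sum -> fused feature

print(fused.shape)  # -> (8,)
```

In a trained model, `W` would be a parameter updated by backpropagation, letting the network learn how strongly each modality pairing should contribute to the fused representation.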