Semantic Segmentation of Spatial Scenes Using Heterogeneous Imaging Sensor Fusion with Classical Machine Learning Methods and Deep Convolutional Neural Networks
Semantic segmentation is a core task in remote sensing, enabling the identification and mapping of land-cover types and object classes within spatial scenes. Data fusion plays a vital role in integrating multiple data sources or modalities, such as satellite imagery, airborne sensors, and ground measurements, to produce a more comprehensive and accurate representation of the Earth's surface. The proposed research aims to develop techniques for semantic segmentation of spatial scenes by fusing data from heterogeneous imaging sensors, combining classical machine learning methods with deep convolutional neural networks (CNNs).
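To make the fusion setting concrete, the sketch below shows one common early-fusion strategy: co-registered modalities are stacked along the channel axis and fed to a fully convolutional network that produces per-pixel class scores. The proposal does not fix a specific design, so the modalities (RGB, near-infrared, digital surface model), array shapes, class count, and the tiny network are illustrative assumptions, not the proposed method.

```python
# A minimal early-fusion sketch. Assumptions: modalities are co-registered
# and resampled to a common grid; shapes, modalities, and class count are
# illustrative placeholders, not values from the proposal.
import numpy as np
import torch
import torch.nn as nn

def fuse_modalities(rgb, nir, dsm):
    """Stack co-registered modalities along the channel axis (early fusion)."""
    # rgb: (H, W, 3) in [0, 255]; nir, dsm: (H, W) single-band rasters.
    nir = (nir - nir.min()) / (nir.max() - nir.min() + 1e-8)  # scale to [0, 1]
    dsm = (dsm - dsm.min()) / (dsm.max() - dsm.min() + 1e-8)
    fused = np.dstack([rgb / 255.0, nir, dsm])                # (H, W, 5)
    # Reorder to the (batch, channels, H, W) layout CNNs expect.
    return torch.from_numpy(fused).float().permute(2, 0, 1).unsqueeze(0)

class TinySegNet(nn.Module):
    """Toy fully convolutional network over the fused 5-channel input."""
    def __init__(self, in_ch=5, n_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),  # 1x1 conv: per-pixel class scores
        )
    def forward(self, x):
        return self.net(x)

# Synthetic stand-ins for real sensor tiles.
rgb = np.random.randint(0, 256, (128, 128, 3)).astype(np.float32)
nir = np.random.rand(128, 128).astype(np.float32)
dsm = np.random.rand(128, 128).astype(np.float32)

x = fuse_modalities(rgb, nir, dsm)       # (1, 5, 128, 128)
logits = TinySegNet()(x)                 # (1, 6, 128, 128)
pred = logits.argmax(dim=1)              # per-pixel label map
```

Early fusion is only one option; mid- or late-fusion designs that keep per-sensor branches are equally compatible with the stated goals.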
The proposed research addresses the challenges of applying deep learning and CNNs in remote sensing, where traditional and advanced deep learning-based methods must be combined to overcome semantic segmentation issues and improve performance. By integrating classical machine learning algorithms with deep CNNs, the research seeks to exploit the strengths of both approaches, yielding more accurate and robust semantic segmentation and better generalization across scenes and sensors. Practical applications include improved identification and mapping of land-cover types and object classes within spatial scenes, benefiting fields such as environmental monitoring, urban planning, and land management.
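One plausible way to realize the classical-plus-deep combination, sketched below, is to use CNN feature maps as per-pixel descriptors for a classical learner such as a random forest. This is a hedged illustration of the general idea only: the proposal does not commit to this design, and the feature extractor, label source, and hyperparameters here are placeholder assumptions.

```python
# Hybrid sketch: deep convolutional features feeding a classical per-pixel
# random-forest classifier. All shapes, labels, and settings are illustrative.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

torch.manual_seed(0)

# Small convolutional feature extractor over a fused multi-sensor patch.
# Weights are random here; in practice it would be pretrained or learned.
feature_net = nn.Sequential(
    nn.Conv2d(5, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
)

x = torch.rand(1, 5, 64, 64)                 # fused multi-sensor input patch
labels = np.random.randint(0, 6, (64, 64))   # placeholder ground-truth map

with torch.no_grad():
    fmap = feature_net(x)[0]                 # (16, 64, 64) deep features

# Flatten to one 16-dim feature vector per pixel for the classical learner.
X = fmap.permute(1, 2, 0).reshape(-1, 16).numpy()   # (64*64, 16)
y = labels.reshape(-1)

rf = RandomForestClassifier(n_estimators=50, n_jobs=-1).fit(X, y)
pred_map = rf.predict(X).reshape(64, 64)     # per-pixel class predictions
```

The same pattern generalizes: the CNN supplies learned spatial context, while the classical model contributes sample efficiency and interpretability, which is one motivation for combining the two families.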