Document Type
Article
Publication Date
6-17-2021
Abstract
In previous work, we demonstrated the efficacy of using Deep Belief Networks, paired with clustering, to identify distinct classes of objects within remotely sensed data, via cluster analysis and qualitative comparison of the output against reference data. In this paper, we quantitatively validate the methodology against datasets currently generated and used within the remote sensing community, and demonstrate the capabilities and benefits of the data fusion methodologies employed. The experiments take the output of our unsupervised fusion and segmentation methodology and map it to various labeled datasets at different levels of global coverage and granularity, in order to test our models' ability to represent structure at both finer and broader scales, across many different kinds of instrumentation, fused where applicable. In all cases tested, our models show a strong ability to segment the objects within input scenes, to exploit multiple fused datasets where appropriate to improve results, and, at times, to outperform the pre-existing datasets. This success will allow the methodology to be applied to concrete use cases and to serve as the basis for future dynamic object tracking across datasets from various remote sensing instruments.
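To make the pipeline described above concrete, the following is a minimal sketch of the validation workflow, not the authors' actual implementation: it assumes a single scikit-learn RBM layer as a stand-in for the paper's Deep Belief Network, k-means as the clustering step, and Hungarian matching to map cluster IDs to reference labels. All function and variable names (segment_scene, map_clusters_to_labels, pixels, reference) are hypothetical.

import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def segment_scene(pixels, n_classes, seed=0):
    """Learn unsupervised features, then cluster pixels into candidate classes.
    A single RBM stands in here for the paper's deeper DBN feature extractor."""
    rbm = BernoulliRBM(n_components=32, learning_rate=0.05,
                       n_iter=20, random_state=seed)
    features = rbm.fit_transform(pixels)  # (n_pixels, 32) hidden activations
    return KMeans(n_clusters=n_classes, n_init=10,
                  random_state=seed).fit_predict(features)

def map_clusters_to_labels(clusters, reference):
    """Optimally assign cluster IDs to reference classes via Hungarian
    matching on the confusion matrix, then score per-pixel agreement."""
    n = max(clusters.max(), reference.max()) + 1
    confusion = np.zeros((n, n), dtype=np.int64)
    for c, r in zip(clusters, reference):
        confusion[c, r] += 1
    rows, cols = linear_sum_assignment(-confusion)  # negate to maximize overlap
    mapping = dict(zip(rows, cols))
    mapped = np.array([mapping[c] for c in clusters])
    return mapped, (mapped == reference).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for fused, co-registered multi-instrument pixel values in [0, 1].
    pixels = rng.random((500, 8))
    reference = rng.integers(0, 4, size=500)  # stand-in reference labels
    clusters = segment_scene(pixels, n_classes=4)
    _, agreement = map_clusters_to_labels(clusters, reference)
    print(f"Agreement with reference labels: {agreement:.2%}")

In the paper's setting, the agreement score would be computed against real labeled products at each level of coverage and granularity rather than the synthetic stand-ins used here.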
Recommended Citation
LaHaye, N.; Garay, M.J.; Bue, B.D.; El-Askary, H.; Linstead, E. A Quantitative Validation of Multi-Modal Image Fusion and Segmentation for Object Detection and Tracking. Remote Sens. 2021, 13, 2364. https://doi.org/10.3390/rs13122364
Peer Reviewed
Yes
Copyright
The authors
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Included in
Environmental Monitoring Commons, Numerical Analysis and Scientific Computing Commons, Other Computer Engineering Commons, Other Computer Sciences Commons, Other Electrical and Computer Engineering Commons, Remote Sensing Commons
Comments
This article was originally published in Remote Sensing, volume 13, in 2021. https://doi.org/10.3390/rs13122364