PRINCIPAL COMPONENTS VERSUS AUTOENCODERS FOR DIMENSIONALITY REDUCTION: A CASE OF SUPER-RESOLVED OUTPUTS FROM PRISMA HYPERSPECTRAL MISSION DATA
Keywords: PRISMA, Single-frame Super-resolution, Dimensionality Reduction, Artificial Intelligence, Quality Assessment
Abstract. This study addresses two issues associated with hyperspectral (HS) data, namely coarse spatial resolution and high data volume, by examining the effect of deep learning-based and traditional dimensionality reduction on super-resolved products generated from the recently launched PRecursore IperSpettrale della Missione Applicativa (PRISMA) HS mission. Four single-frame super-resolution (SR) algorithms have been used to super-resolve a 30 m PRISMA scene of Ahmedabad, India, and generate 15 m spatial resolution images with both spatial and spectral fidelity. Following a comparative assessment and validation protocol, iterative back projection (IBP) and sparse representation (SIS) emerge as the best- and worst-performing SR algorithms, respectively. Next, denoising autoencoders and the principal component transformation (PCT), computed using both singular value decomposition and eigenvalue decomposition, have been applied to the original PRISMA dataset and the IBP- and SIS-based super-resolved datasets. The resulting low-dimensional representations have been assessed for how well they preserve the original dataset's topology using Lee and Verleysen's label-independent co-ranking matrix and loss-of-quality measure. The findings suggest that autoencoders are computationally expensive and require a larger neighbourhood size than PCT and its variants to produce a high-quality encoding. These insights are significant for urban information extraction, as earlier studies offer few direct comparative assessments of machine learning-based linear and non-linear data compression methods.
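For orientation, the Python sketch below illustrates a simplified single-band, single-frame iterative back projection loop: a bicubically upsampled estimate is refined by repeatedly back-projecting the residual between the observed low-resolution band and a simulated one. The blur model, step size, and iteration count are illustrative assumptions and do not reproduce the exact SR implementation evaluated in this study.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def ibp_super_resolve(lr, scale=2, iterations=20, blur_sigma=1.0, step=1.0):
    # Initial high-resolution guess: cubic-spline upsampling of the LR band.
    hr = zoom(lr, scale, order=3)
    for _ in range(iterations):
        # Simulate the imaging model: blur the current estimate and downsample.
        simulated_lr = zoom(gaussian_filter(hr, blur_sigma), 1.0 / scale, order=3)
        # Back-project the low-resolution residual onto the HR grid.
        hr += step * zoom(lr - simulated_lr, scale, order=3)
    return hr

# Illustrative call on a random stand-in for one 30 m PRISMA band.
lr_band = np.random.rand(50, 50)
hr_band = ibp_super_resolve(lr_band, scale=2)
print(hr_band.shape)  # (100, 100)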
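The linear reduction referred to above, the principal component transformation, can be computed either from a singular value decomposition of the mean-centred pixel matrix or from an eigendecomposition of the band covariance matrix. The NumPy sketch below outlines both routes on a randomly generated stand-in cube; the array shapes and band count are assumptions, not the actual PRISMA configuration.

import numpy as np

def pct_svd(cube, n_components):
    # Principal component transform via SVD of the mean-centred pixel matrix.
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)                       # centre each band
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T          # project onto leading components
    return scores.reshape(rows, cols, n_components)

def pct_eig(cube, n_components):
    # Same transform via eigendecomposition of the band covariance matrix.
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)             # (bands, bands) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]         # sort descending by variance
    W = eigvecs[:, order[:n_components]]
    return (X @ W).reshape(rows, cols, n_components)

# Example on a random stand-in cube (dimensions are illustrative only).
cube = np.random.rand(64, 64, 234)
pcs = pct_svd(cube, n_components=10)
print(pcs.shape)                              # (64, 64, 10)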
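A minimal denoising autoencoder over per-pixel spectra is sketched below in PyTorch; the layer widths, noise level, and optimiser settings are assumptions chosen for brevity and are not the architecture reported in this work.

import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    # Fully connected denoising autoencoder for per-pixel spectra.
    def __init__(self, n_bands, n_latent):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),
            nn.Linear(128, n_latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 128), nn.ReLU(),
            nn.Linear(128, n_bands),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_dae(spectra, n_latent=10, noise_sigma=0.05, epochs=50, lr=1e-3):
    # Train by reconstructing clean spectra from artificially corrupted inputs.
    model = DenoisingAutoencoder(spectra.shape[1], n_latent)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        noisy = spectra + noise_sigma * torch.randn_like(spectra)
        opt.zero_grad()
        loss = loss_fn(model(noisy), spectra)
        loss.backward()
        opt.step()
    return model

# Illustrative usage: random vectors in place of real PRISMA spectra.
spectra = torch.rand(4096, 234)
model = train_dae(spectra)
encoding = model.encoder(spectra)        # low-dimensional representation
print(encoding.shape)                    # torch.Size([4096, 10])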
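For the quality assessment, Lee and Verleysen's co-ranking matrix records, for every pair of points, the rank of their distance in the original space against the rank in the reduced space; neighbourhood preservation can then be summarised by Q_NX(K), the fraction of the K nearest neighbours retained after reduction. The sketch below is an unoptimised rendering of these standard definitions on toy data and does not reproduce the exact loss-of-quality measure used here.

import numpy as np
from scipy.spatial.distance import cdist

def coranking_matrix(X_high, X_low):
    # Co-ranking matrix Q built from pairwise distance ranks in both spaces.
    n = X_high.shape[0]
    rank_high = cdist(X_high, X_high).argsort(axis=1).argsort(axis=1)
    rank_low = cdist(X_low, X_low).argsort(axis=1).argsort(axis=1)
    Q = np.zeros((n - 1, n - 1), dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i != j:                         # self-distances are excluded
                Q[rank_high[i, j] - 1, rank_low[i, j] - 1] += 1
    return Q

def q_nx(Q, K):
    # Average fraction of K-nearest neighbours preserved after reduction.
    n = Q.shape[0] + 1
    return Q[:K, :K].sum() / (K * n)

# Illustrative check: project random 20-band spectra to 3 principal components.
rng = np.random.default_rng(0)
X = rng.random((200, 20))
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_low = Xc @ Vt[:3].T
Q = coranking_matrix(X, X_low)
print(q_nx(Q, K=10))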