ENHANCEMENT OF SPATIAL RESOLUTION OF THE LROC WIDE ANGLE CAMERA IMAGES

Image fusion, a popular method for resolution enhancement in Earth-based remote sensing studies, involves the integration of the geometric (sharpness) detail of a high-resolution panchromatic (Pan) image and the spectral information of a lower resolution multi-spectral (MS) image. Image fusion with planetary images is not as widespread as in terrestrial studies, although successful application of image fusion can lead to the generation of higher resolution MS image data. A comprehensive comparison of six image fusion algorithms in the context of lunar images is presented in this work. The performance of these algorithms is compared by visual inspection of the high-resolution multi-spectral products, of derived products such as band-to-band ratio and composite images, and by performance metrics, with an emphasis on spectral content preservation. Enhanced MS images of the lunar surface can enable new science and maximize the science return for current and future missions.


INTRODUCTION
Planetary imaging by orbiters is often performed using two separate instruments: a high-resolution Pan camera, and a lower resolution MS camera or hyper-spectral (HS) imager. Small pixel footprint cameras (small instantaneous field-of-view (IFOV), high resolution) usually require a wider spectral bandwidth for shorter exposures (to avoid smear); likewise, a narrow spectral band sensing element results in a larger pixel scale (larger IFOV). The design of planetary imaging instruments is typically more restrictive in terms of volume and mass constraints than Earth-based examples. High-resolution imagers are typically large in size and mass, so in addition to the IFOV vs. bandwidth tradeoff, there are limitations on the physical dimensions and mass of the final instrument based on the payload for a mission. If a high-resolution multi-spectral (HRMS) imager were to be deployed on a planetary mission, the corresponding data volume would likely outstrip typical planetary spacecraft onboard mass storage and downlink capabilities (Zhang, 2004).
A popular method in Earth-based applications for resolution enhancement involves merging Pan and MS images in a process known as image fusion: a procedure that integrates the geometric (sharpness) detail of a high-resolution Pan image and the spectral information of a low-resolution MS image (Pohl and Van Genderen, 1998; Zhang, 2004). Typically an MS image contains lower frequency information (colors and tones) relative to a Pan image that shows higher frequency details (edges, boundaries). For example, if the source scene has sharp edges, those edges are present in the MS bands but not strongly represented (blurred or fuzzy), whereas in the Pan frame the edges are sharply resolved. In image fusion, the goal is to transfer the high-frequency information of the Pan into the MS image with minimal change to its original low-frequency content.
Since the design of high-resolution multi-spectral instruments is challenging and cost-prohibitive in planetary imaging, the option of improving the resolution of spectral information by post-processing methods is attractive. In the last decade, Pan and MS imaging of the lunar surface by several orbiters (Lunar Reconnaissance Orbiter (Chin et al., 2007), Kaguya (Haruyama et al., 2008), Chandrayaan-1 (Goswami and Annadurai, 2009) and Chang'E 1-3 (Xiao, 2014)) and the availability of modern image analysis techniques and computational resources have opened new opportunities for applying image fusion techniques to planetary observations. Investigating the performance of existing image fusion methods on existing lunar images advances the state of the art for planetary image fusion. In this work, we perform image fusion for the first time with Lunar Reconnaissance Orbiter Camera (LROC) (Robinson et al., 2010) Narrow Angle Camera (NAC) and LROC Wide Angle Camera (WAC) images.
Our work also provides the first comparative performance evaluation of six well-known image fusion methods, drawn from three classes of image fusion methods, for lunar images: (1) Intensity-Hue-Saturation (IHS), (2) Brovey Transform (BT), (3) Principal Component Analysis (PCA), (4) University of New Brunswick (UNB), (5) High Pass Filter (HPF) and (6) Additive Wavelet (AWT). The performance of each method is assessed both qualitatively (visual examination) and quantitatively (via well-known image fusion performance metrics). Preservation of spectral information is weighted more heavily in our performance analysis.
This paper is organized as follows: the six image fusion methods being compared are first discussed, followed by a description of the raw images. The method of performance evaluation is presented next, followed by a discussion of the results.

CLASSIFICATION AND OVERVIEW OF IMAGE FUSION METHODS
Image fusion methods are classified based on the scheme of injecting sharpness details into the MS bands. Component substitution methods (e.g. IHS and PCA) use algebraic transforms to segregate the brightness, intensity and luminance components from the composite (e.g. color) MS image and replace the intensity component with the Pan image; the Intensity-Hue-Saturation (IHS) fusion (Gillespie et al., 1986; Welch and Ehlers, 1987) is a classic example of this class.

Modulation schemes form the second class of image fusion methods, where sharpness details are modulated (via spatial multiplication or direct addition of high-frequency information) into the MS images. Typically a synthetic, low-resolution Pan image (LRPI) is used to normalize the original Pan; the normalized Pan is then spatially multiplied into the MS image, or the difference of the Pan and LRPI is added to the MS image. The Brovey Transform (BT) (Gillespie et al., 1987), HPF (Gangkofner et al., 2008) and UNB (Zhang, 1999) methods fall into this class. The difference between these methods lies in the LRPI synthesis: for BT, the average of the MS bands is used as the LRPI; for HPF, a boxcar-filtered Pan is the LRPI; and for UNB, the LRPI is constructed via least-squares regression as a linear combination of the MS bands that spectrally overlap the Pan.
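The BT and UNB flavors of LRPI synthesis can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming assumptions (`ms_up` is a stack of up-sampled MS bands, `pan` the co-registered Pan raster); it is not the paper's implementation, and the HPF boxcar variant is omitted here.

```python
import numpy as np

def lrpi_bt(ms_up):
    """BT-style LRPI: the plain average of the up-sampled MS bands."""
    return ms_up.mean(axis=0)

def lrpi_unb(ms_up, pan):
    """UNB-style LRPI: least-squares fit of the Pan as a linear
    combination of the MS bands that overlap it spectrally."""
    K = ms_up.shape[0]
    A = ms_up.reshape(K, -1).T                      # pixels x bands design matrix
    w, *_ = np.linalg.lstsq(A, pan.ravel(), rcond=None)
    return (A @ w).reshape(pan.shape)               # synthesized low-res Pan
```

In a multiplicative scheme the fused band is then `ms_up[k] * pan / lrpi`, i.e. the LRPI normalizes the Pan before it modulates each band.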
The third category of image fusion methods comprises Multi-Resolution Analysis (MRA) based methods, where the MS and Pan images are first decomposed into coarse and fine components in the scale-space domain; spatial details from the Pan image are then injected at fine scales while maintaining the spectral details of the MS. Wavelets and other MRA techniques are used in this class of image fusion. The Additive Wavelet method (Núñez et al., 1999), where a normalized wavelet plane of the Pan is added to the corresponding MS wavelet plane at each decomposition level, is implemented and used in this work. All image fusion methods used here can be expressed as a form of a generalized image fusion scheme (Table 1), where high spatial-frequency information from the Pan is integrated into the MS bands to form the HRMS as per the following:

HRMS_k = MS_k↑ + α_k Φ_k

In the above expression, MS_k↑ indicates the up-sampled MS band k; the values of α_k and Φ_k change with the image fusion algorithm (Table 1).
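The generalized scheme can be illustrated with an HPF-style detail plane, where Φ is the Pan minus a boxcar-filtered Pan. This is a minimal sketch under our own naming assumptions (function and array names are not from the paper), not the implementation evaluated in this work:

```python
import numpy as np

def boxcar(img, size=5):
    """Simple boxcar (moving-average) low-pass filter, edge-padded."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def fuse(ms_up, pan, alpha):
    """Generalized fusion HRMS_k = MS_k(up) + alpha_k * Phi_k, with an
    HPF-style detail plane Phi = Pan - boxcar(Pan), shared by all bands."""
    phi = pan - boxcar(pan)                       # high-frequency Pan detail
    return ms_up + alpha[:, None, None] * phi     # inject into each band
```

Other algorithms differ only in how α_k and Φ_k are built, which is what Table 1 summarizes.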
The Visible (Vis) and Ultra-Violet (UV) bands are treated as separate MS stacks during image fusion to fix the ratio between Pan and MS resolution at 4:1 (the target spatial enhancement is 4x). The pixel scales of the Pan and MS images used for image fusion were: NAC at 16 m/px with WAC-Vis at 64 m/px, and NAC at 64 m/px with WAC-UV at 256 m/px. Note that UV is at a 4x coarser pixel scale than Vis, and this affects the resolution of images derived by combining Vis and UV. The native pixel scale of the NAC images is 0.5 m/px (from 50-km altitude), so the Pan images are re-sampled versions of the original NAC.
The two example lunar images are of Ina-D, a volcanic feature first discovered in Apollo-era photography, and South Ray crater, a young impact crater explored during the Apollo 16 mission. The observations used for Ina-D were NAC: M1173023278 (left and right pair); WAC: M165808188CE. For the South Ray target, the observations used were NAC: M1182366809 and WAC: M144524970CE. The NAC pair images and the frames in the WAC observation were mosaicked prior to extracting the region-of-interest for each target geographical area.
Ina-D (or Ina) is thought to have formed in late basaltic eruptions (Strain and El-Baz, 1980); its age is proposed to be <100 My (Braden et al., 2014). Two distinct units characterize Ina: a rough floor material thought to underlie a relatively smooth unit delimited by steep lobate margins. Ina's sharp topographic contrast results in images rich with spatial details (the smooth units contribute more to the spectral details); an increased spatial resolution via image fusion would better characterize the two units and the color signatures observable in ratio and/or color composite images. The second target, South Ray crater (700 m diameter), is a young (Copernican era) crater located in the highland terrain, 3.9 km south of the Apollo 16 landing site. The higher albedo (immature) ejecta rays are in sharp contrast to the surrounding mature, lower albedo surface. The 321 nm/415 nm ratio images show variations related to the maturity of the regolith (Denevi et al., 2014). An improvement in spatial detail could lead to a refined characterization of the ejecta rays, providing better insight into ejecta emplacement mechanisms.

PERFORMANCE ANALYSIS
Absolute performance of an image fusion method depends on the specific intended use of the resulting HRMS product: whether spatial or spectral details are more important, and the degree of compromise permissible between the two aspects. Only basic forms of the known algorithms are implemented in this work, and no tuning was performed to enhance spatial or spectral performance. During evaluation, both qualitative (visual characterization) and quantitative (metric-based characterization) criteria are used. The quality of two HRMS bands (UV 1 and Vis 5) and of products derived from the HRMS bands (band ratio and false-color composite image products) is analyzed visually. Spatial enhancement is assessed from the individual high-resolution bands (Figures 4, 5), while visible spectral irregularities are more evident in the ratio (Figure 6) and composite (Figure 7) images.
The UV 1 to Vis 1 (321/415 nm) band ratio was specifically chosen due to its known applications in spectral characterization of the lunar surface. The ratio follows TiO2 abundance in lunar soils (Robinson et al., 2011), was used to classify areas of basaltic magmatism on lunar mare plains, leading to the identification of irregular mare patches (such as Ina-D (Braden et al., 2014)), and was used to classify mare units (Boyd et al., 2012). The 321/415 nm ratio also allows for analysis of surfaces with increased exposure to space weathering, a diagnostic feature for the youngest lunar craters and a possibly helpful indicator of the relative age of Copernican craters (Denevi et al., 2011).
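Computing such a ratio product from two co-registered bands is straightforward, though low-signal pixels (e.g. shadow) need a guard against division by zero. A minimal sketch, with function and parameter names of our own choosing:

```python
import numpy as np

def band_ratio(uv1, vis1, eps=1e-9):
    """321/415 nm band-ratio image from co-registered UV 1 and Vis 1 rasters.
    `eps` floors the denominator so shadowed (near-zero) pixels stay finite."""
    return uv1 / np.maximum(vis1, eps)
```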
Composite (false color) images are effective in conveying the relative spectral content of three bands at any specific location (x, y). Specific band choices for composite images can accentuate spectral signatures specific to mineralogic context or to exposure to space weathering effects. For example, lunar swirls are mapped using a WAC composite (red = 415 nm, blue = 321/415 nm, and green = 321/360 nm) that reveals the locations of the swirls (Denevi et al., 2015). A more general false-color mapping is used in this work: the three selected bands are the two extreme LROC WAC filters, UV 1 (321 nm) and Vis 5 (689 nm), and the shortest visible wavelength band, Vis 1 (415 nm).
Quantitative performance evaluation is performed by computing the values of five image fusion quality metrics (Tables 3 and 4): (1) Average Gradient (AG), the average magnitude of the image gradient computed in the row and column directions; a larger AG implies higher spatial resolution (Li et al., 2005); (2) ERGAS, a French acronym for Relative Dimensionless Global Error in Synthesis (Wald, 2002); ERGAS is zero for distortion-free image fusion; (3) Spectral Angle Mapper (SAM), the absolute value of the angle between the true and estimated spectral vectors (Yuhas et al., 1992); SAM is zero for no spectral mismatch; (4) Universal Image Quality Index (UIQI) (Wang and Bovik, 2002), which models image distortion as a product of three factors: loss of correlation, radiometric distortion, and contrast distortion; and (5) Spectral Distortion Index (SD) (Alparone et al., 2008), the p-norm of the deviation between the pairwise similarity matrices constructed from the MS and the HRMS bands; SD is zero for no spectral distortion.
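Two of these metrics are compact enough to sketch directly from their definitions. The following is a minimal NumPy illustration (our own function names, using the standard formulas rather than this paper's exact implementation); `ref` and `test` are (bands, rows, cols) stacks:

```python
import numpy as np

def sam(ref, test):
    """Spectral Angle Mapper: mean angle (radians) between the reference and
    test spectral vectors at each pixel; zero for no spectral mismatch."""
    r = ref.reshape(ref.shape[0], -1)
    t = test.reshape(test.shape[0], -1)
    cos = (r * t).sum(0) / (np.linalg.norm(r, axis=0)
                            * np.linalg.norm(t, axis=0) + 1e-12)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def ergas(ref, test, ratio=4):
    """ERGAS = 100/ratio * sqrt(mean_k (RMSE_k / mu_k)^2), where `ratio` is
    the MS-to-Pan pixel-size ratio; zero for distortion-free fusion."""
    K = ref.shape[0]
    acc = 0.0
    for k in range(K):
        rmse = np.sqrt(np.mean((ref[k] - test[k]) ** 2))
        acc += (rmse / ref[k].mean()) ** 2      # assumes nonzero band means
    return 100.0 / ratio * np.sqrt(acc / K)
```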
In the description of the metrics the following symbols and operators are used: E[•] computes the mean of all elements in an array/raster; CC computes the correlation coefficient, either between two rasters (single value) or between two raster stacks (e.g. between the MS and the resulting HRMS); RMSE_k is the root-mean-square deviation of the kth band of the test image from the reference image; g is a 2-dimensional Laplacian operator; ∇F is the gradient of a raster (or raster stack) F; ||F|| is the norm of the raster stack computed at each (x, y); and F_k,s is a standardized raster (after mean subtraction) for the kth band. A reference raster (or raster stack, e.g. the original MS band(s)) is denoted by R, and r, f are the reference and test spectral vectors obtained at a given location (x, y). Suffixes are used to denote specific values of the mean µ and standard deviation σ.
In addition to the overall values of the quantitative evaluation metrics, the performance of the image fusion algorithms is further judged based on Wald's protocol (Wald et al., 1997) and on the band-similarity-based spectral quality of the fused image (Pradhan et al., 2006).
In particular, we check the consistency property (a necessary condition) for image fusion from Wald's protocol, which implies that the synthesized MS image, when degraded to its original resolution (degraded HRMS, DHRMS), should resemble the original MS image. We compare the original MS WAC image and the DHRMS via cross-correlation (CC: Tables 5, 6) and UIQI (Tables 7, 8) metrics for both Ina-D and South Ray.
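The consistency check can be sketched as: degrade each fused band back to MS scale, then correlate it with the original MS band. A minimal illustration under our own naming assumptions (actual processing would use a proper resampling kernel rather than plain block averaging):

```python
import numpy as np

def degrade(band, factor=4):
    """Degrade an HRMS band to MS scale by block averaging (DHRMS)."""
    H, W = band.shape
    return band[:H - H % factor, :W - W % factor] \
        .reshape(H // factor, factor, W // factor, factor).mean(axis=(1, 3))

def consistency_cc(ms_band, hrms_band, factor=4):
    """Wald consistency: CC between the original MS band and the DHRMS;
    values near 1 indicate the necessary condition is well satisfied."""
    d = degrade(hrms_band, factor)
    return float(np.corrcoef(ms_band.ravel(), d.ravel())[0, 1])
```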
The necessary condition requires a reference for comparison and does not test fusion performance at the HRMS resolution. For preservation of spectral properties, the relationship between bands must be maintained, such that the pairwise correlation or similarity between any two bands in the MS image is unaltered by image fusion (Pradhan et al., 2006). While CC is a standard similarity measure, CC is insensitive to local changes in average signal level and contrast when computed pairwise for bands (Alparone et al., 2008). Hence we adopt the method used by Alparone et al. (2008) and compute the pairwise UIQI for bands before and after image fusion to compute the Spectral Distortion Index (Tables 3, 4).
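The pairwise-UIQI spectral distortion idea can be sketched as follows: compute UIQI for every band pair within the MS stack and within the HRMS stack, then take the p-norm of the deviations. A minimal sketch using the global form of UIQI (our own function names, simplified from Alparone et al.'s windowed formulation):

```python
import numpy as np

def uiqi(a, b):
    """Universal Image Quality Index between two rasters (global form):
    product of correlation, luminance and contrast terms."""
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return float(4 * cov * ma * mb / ((va + vb) * (ma**2 + mb**2) + 1e-12))

def spectral_distortion(ms, hrms, p=1):
    """SD index: p-norm of the deviations between the pairwise UIQI
    similarity matrices of the MS and HRMS stacks; zero = no distortion."""
    K = ms.shape[0]
    dev = [abs(uiqi(ms[i], ms[j]) - uiqi(hrms[i], hrms[j]))
           for i in range(K) for j in range(i + 1, K)]
    return float(np.linalg.norm(np.array(dev), ord=p))
```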

RESULTS AND DISCUSSION
Improvement in the spatial resolution of the WAC images is evident when visually comparing the synthesized HRMS images (Vis 5 and UV 1 band images, Figures 4 and 5 respectively) from each algorithm to the corresponding 'before' images (Figures 1 and 2). The Pan image (LROC NAC; Figure 1A for Ina-D and Figure 2A for South Ray) provides a visual reference for the theoretical maximum spatial resolution of the HRMS product. The UV 1 HRMS images (Figures 5A and 5B) contain more high-frequency spatial detail than the original images (Figures 1B and 2B) for both Ina-D and South Ray crater. Note that the pixel resolution of the WAC UV bands is 4 times coarser than that of the visible bands; at the original resolutions both Ina-D and South Ray crater have barely recognizable features. For example, in the Ina-D HRMS the smaller craters in the north and south can be identified, and in the South Ray HRMS the structure of the rays is sharper. The contrast between foreground and background is non-uniform in the IHS and PCA UV 1 HRMS results (more visible in IHS). The HPF and AWT results are not as sharp as the other UV 1 HRMS results.
Image fusion results for the HRMS Vis 5 band show considerable improvement in high-frequency detail with less blockiness (present in the original images at the same pixel scale). Spatial resolution improvement is clearly more prevalent for the IHS, PCA, BT and UNB algorithms; the HPF and AWT results are different: there is overall weaker contrast and smaller gains in sharpness. The very small craters in Ina-D and the rocky floor of South Ray are better shown by the IHS, PCA, BT and UNB algorithms.
Band ratio (321/415 nm) images derived from the HRMS products show improvements in high-frequency detail. The HPF and AWT results are the sharpest (determined by examination of the ratio images before and after image fusion), followed by PCA and IHS, and finally by BT and UNB. However, the IHS and PCA results show spectral distortion: local patches of background-to-foreground contrast differences uncorrelated with morphology. Further, the morphological outline fades and merges into the background in the south-west corner of Ina-D; a crater at this south-west corner cannot be identified in the IHS HRMS band ratio. The structure of the ejecta rays is expected to be enhanced by contrast differences in the ratio images; this effect is clearly observed in the HPF and AWT results for South Ray, while the other algorithms show less clarity for the ray structures. The ratio images for BT and UNB are visually very similar: for these two algorithms the Pan and Φ_k are identical for all bands, and these terms cancel when a band ratio is computed.
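The cancellation can be made explicit. Writing a multiplicative (BT-style) fused band as the up-sampled MS band modulated by the same normalized Pan, the modulation drops out of any band ratio:

```latex
\mathrm{HRMS}_k = \mathrm{MS}_k\!\uparrow \cdot \frac{\mathrm{Pan}}{\mathrm{LRPI}}
\qquad\Longrightarrow\qquad
\frac{\mathrm{HRMS}_i}{\mathrm{HRMS}_j}
= \frac{\mathrm{MS}_i\!\uparrow \cdot \mathrm{Pan}/\mathrm{LRPI}}
       {\mathrm{MS}_j\!\uparrow \cdot \mathrm{Pan}/\mathrm{LRPI}}
= \frac{\mathrm{MS}_i\!\uparrow}{\mathrm{MS}_j\!\uparrow}
```

The resulting ratio is just the ratio of the up-sampled original bands, so no spatial enhancement survives in the ratio product.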
Composite (false-color) images reveal transformation-dependent spectral distortion at Ina-D: (a) morphologically uncorrelated color patches can be seen in IHS (blue patches) and PCA (less pronounced than in IHS), and (b) there is smearing of colored pixels (north-east quadrant of Ina-D). Similar distortion effects are not prominent in the South Ray HRMS color composite images. Note that the composite image colors in the HPF and AWT results are slightly different from those of the other algorithms for both the Ina-D and South Ray targets. Further, larger spatial resolution enhancement is obtained with the IHS, PCA, BT and UNB algorithms for composite images. Spectral angle values were computed at each pixel for the five visible bands of the LROC WAC images; the result is shown in Figure 3.

CONCLUSION
The aligned fields-of-view of the LROC NAC and WAC and the relative spectral response of their filters (the NAC response spans that of the five WAC visible filters) make these instruments nearly optimal for spatial resolution enhancement via image fusion. All six image fusion algorithms from the three classes of pixel-level image fusion methods, component substitution (IHS, PCA), modulation (BT, HPF, UNB) and multi-resolution analysis (AWT), were successful in enhancing the spatial resolution of the LROC WAC MS bands.
An image fusion method that successfully improves the spatial resolution of spectral data is enabling in the context of lunar science. We found that for spectral enhancement via band ratios (a common practice in planetary science and remote sensing), HRMS results from multiplicative modulation-based image fusion methods cannot be used, due to an implicit cancellation of the normalized Pan (the same for each HRMS band) from the band ratio. For example, the UNB method produces high spatial resolution products with comparatively small magnitudes of spectral distortion. However, UNB (in the form implemented here) is modulation (multiplicative) based; usable UNB implementations must be able to generate reliable HRMS band-to-band ratio images by tweaking the LRPI synthesis procedure.
As with image fusion applied to Earth-based remote sensing, spatial resolution enhancement was achieved for the LROC WAC images (e.g. IHS, PCA). The best spatially performing algorithms were IHS, PCA, UNB and BT, while the spectral performance of the wavelet-based method was found to be the best. A compromise between spatial and spectral performance for generating HRMS lunar images is promising: implementation and performance analysis of hybrid schemes (e.g. wavelets and IHS) may further improve on our initial success.
The HRMS results presented here point toward an optimal (spectrally correct and spatially resolved within accepted tolerances) image fusion scheme for applying LROC lunar images to other key science targets.

Figure 3: Spectral Angle Mapper (SAM) images for (A) Ina-D and (B) South Ray. Pixel value is the spectral angle between WAC visible bands before and after image fusion. The contrast limit maximum saturates the top 2% of the Ina-D PCA SAM pixel values.

Figure 4: Image fusion results for LROC WAC Vis 5 band

Figure 6: Ratio images (LROC WAC UV 1 / Vis 1) of the image fusion results. Contrast stretch limits saturate the upper and lower 2% of the pixel values of the individual images.

Table 2: Quantitative image fusion performance metrics

Table 6: South Ray: Correlation Coefficient (MS, DHRMS)

Satisfaction of the consistency condition is assessed via the CC and UIQI metrics. For both CC and UIQI, the Vis bands perform better than the UV bands. Note that the UIQI reported in Tables 7 and 8 is with respect to the DHRMS, and high values indicate that a particular fusion algorithm better satisfies the necessary condition of image fusion (see section 4). The values of CC and UIQI show more variation for Ina-D and show that AWT and HPF generate the most spectrally correct results.