INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method can integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to obtain the fused image. The efficacy of the proposed method is validated using simulated images.


INTRODUCTION
In order to obtain more information, image fusion techniques are often used to integrate the complementary information among different remote sensing images. To date, a great number of fusion methods for remote sensing images have been developed (Luo et al., 2002; Pohl and van Genderen, 1998). Classical remote sensing image fusion techniques include panchromatic (PAN) / multi-spectral (MS) fusion (Joshi and Jalobeanu, 2010; Li and Leung, 2009), MS / hyper-spectral (HS) fusion (Eismann and Hardie, 2005) and multi-temporal (MT) fusion (Shen et al., 2009), etc. However, most fusion methods were developed to fuse images from two sensors, and little work has attempted to solve the fusion problem for more than two sensors. In this paper, we propose an integrated fusion method for remote sensing images at multiple temporal, spatial and spectral scales. The method is based on the maximum a posteriori (MAP) framework and can fuse images from an arbitrary number of optical sensors.

IMAGE OBSERVATION MODELS
The image observation models relate the desired image to the observed images. Let x denote the desired image and B_x its total band number. Generally, the band numbers of the observed images are less than or equal to B_x. Here we use y to denote the images whose band number is equal to B_x, and z to denote the images whose band number is less than B_x.

The first image observation model describes the spatial degradation of the desired image. The b-th band of the k-th image of y can be denoted as

    y_{k,b} = D_k H_k x_b + n_{k,b}    (1)

where D_k is the down-sampling matrix, H_k is the blurring matrix and n_{k,b} is the noise vector. For convenience, equation (1) can be rewritten as (2) by substituting the product of the two matrices with a single matrix M_k:

    y_{k,b} = M_k x_b + n_{k,b}    (2)

The second image observation model relates the desired image x to the observed image z. Generally, the bands of z are wider than those of x. It has been proved that a wide-band image is almost a linear combination of several narrow-band images when the wide band approximately covers the narrow bands (Boggione et al., 2003; Li and Leung, 2009; Vega et al., 2009). Thus, if the spatial resolutions of x and z are the same, the spectral combination model can be denoted as a weighted sum of the bands of x plus noise. In the more general case, in which z also differs from x in spatial resolution, the spatial degradation in (2) and the spectral combination act together, and the model can be simplified by multiplying the corresponding matrices and vectors.
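As a concrete illustration, the two observation models can be sketched in code. This is a minimal NumPy/SciPy sketch under assumed operators: a Gaussian blur stands in for H, simple decimation for D, and the spectral weights are hypothetical; none of the function names or parameter values come from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_degradation(x, blur_sigma=1.0, factor=2, noise_std=0.01, rng=None):
    """First observation model: y = D H x + n.
    x: desired image with shape (bands, rows, cols).
    H is approximated by a Gaussian blur, D by decimation."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = np.stack([gaussian_filter(b, blur_sigma) for b in x])  # H x
    decimated = blurred[:, ::factor, ::factor]                       # D H x
    return decimated + noise_std * rng.standard_normal(decimated.shape)

def spectral_combination(x, weights, noise_std=0.01, rng=None):
    """Second observation model: a wide-band image z approximated as a
    linear combination of the narrow bands of x, plus noise."""
    rng = np.random.default_rng() if rng is None else rng
    z = np.tensordot(weights, x, axes=1)          # sum_b r_b * x_b
    return z + noise_std * rng.standard_normal(z.shape)

x = np.random.rand(4, 64, 64)                     # desired image, 4 bands
y = spatial_degradation(x)                        # low-resolution observation
z = spectral_combination(x, np.full(4, 0.25))     # simulated wide-band image
```

With equal weights of 0.25 over four bands, z mimics a panchromatic band covering the four narrow bands, which is the situation the linear-combination assumption describes.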

THE FUSION METHOD
The proposed method is based on the maximum a posteriori (MAP) framework. Given the observed images y and z, the desired image can be estimated as

    x̂ = arg max_x p(x | y, z)    (7)

Applying Bayes' rule, equation (7) becomes

    x̂ = arg max_x p(y, z | x) p(x) / p(y, z)    (8)

Since p(y, z) is independent of x, it can be considered a constant and removed from the maximum function:

    x̂ = arg max_x p(y, z | x) p(x)    (9)

Assuming that the noise is zero-mean Gaussian and that each observed image is independent, the conditional density p(y | x) provides a measure of the conformance of the estimated image x to the observed image y according to the observation model (2), and p(z | x) plays the same role for z. For the prior density p(x), the edge-preserving Huber-Markov image model (Schultz and Stevenson, 1996; Shen and Zhang, 2009) is employed, in which finite second-order differences are computed in two adjacent cliques for every location (i, j) in the image; here a clique ξ is a local group of pixels, and ψ is the set of all the cliques.
In (13), ρ(·) is the Huber function, defined as

    ρ(u) = u^2,              |u| ≤ μ
    ρ(u) = 2μ|u| − μ^2,      |u| > μ    (14)

where μ is a threshold parameter separating the quadratic and linear regions. When μ approaches +∞, the prior becomes the Gauss-Markov prior, which imposes spatial constraints similar to those of the Laplacian prior. Substituting (10)-(14) into (9), applying the monotonic logarithm function and dropping the constant terms, the maximization of this posterior probability distribution becomes equivalent to a regularized minimum problem of the form x̂ = arg min_x E(x), in which E(x) consists of the quadratic data-fidelity terms for y and z derived from the Gaussian likelihoods, plus the Huber prior term. In this paper, we assume the model parameters α are invariable.
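The Huber function in (14) can be written down directly; a minimal sketch, vectorised over an array of clique differences u, with the threshold μ passed as a parameter:

```python
import numpy as np

def huber(u, mu):
    """Huber function: quadratic for |u| <= mu, linear beyond.
    rho(u) = u^2              if |u| <= mu
           = 2*mu*|u| - mu^2  otherwise."""
    u = np.asarray(u, dtype=float)
    quadratic = u ** 2
    linear = 2.0 * mu * np.abs(u) - mu ** 2
    return np.where(np.abs(u) <= mu, quadratic, linear)
```

At |u| = μ both branches equal μ^2, so the function is continuous, and for large μ the linear branch is never reached, recovering the pure quadratic (Gauss-Markov) case described above.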
Thus, the minimization function can be simplified accordingly, where λ1 and λ2 are two regularization parameters. Finally, the steepest gradient descent method (Shen and Zhang, 2009; Shen et al., 2010) is employed to solve for the fused image.
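To make the descent step concrete, the sketch below minimises a heavily simplified instance of such an energy: a single observation y = D H x + n with a 2x2 block-average standing in for D H, a Huber penalty on first-order differences standing in for the second-order clique differences, and hand-picked step size and λ. The operators A/At and all parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def A(x, factor=2):
    """Assumed forward operator D H: 2x2 block-average blur + decimation."""
    r, c = x.shape
    return x.reshape(r // factor, factor, c // factor, factor).mean(axis=(1, 3))

def At(y, factor=2):
    """Adjoint of A: replicate each value, scaled to match A's averaging."""
    return np.repeat(np.repeat(y, factor, 0), factor, 1) / factor ** 2

def huber_grad(u, mu):
    """Derivative of the Huber function (quadratic/linear regions)."""
    return np.where(np.abs(u) <= mu, 2.0 * u, 2.0 * mu * np.sign(u))

def fuse(y, lam=0.05, mu=0.1, step=0.5, iters=200):
    """Steepest descent on E(x) = ||y - A x||^2 + lam * sum rho(diff(x))."""
    x = np.repeat(np.repeat(y, 2, 0), 2, 1)          # crude initial estimate
    for _ in range(iters):
        data_grad = 2.0 * At(A(x) - y)               # gradient of data term
        dx = np.diff(x, axis=1, append=x[:, -1:])    # horizontal differences
        dy = np.diff(x, axis=0, append=x[-1:, :])    # vertical differences
        gx, gy = huber_grad(dx, mu), huber_grad(dy, mu)
        # divergence-like accumulation of the prior gradient
        prior_grad = -(gx - np.roll(gx, 1, axis=1)) - (gy - np.roll(gy, 1, axis=0))
        x -= step * (data_grad + lam * prior_grad)
    return x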

EXPERIMENTAL RESULTS
The proposed method was tested using simulated images, and the experimental images and results are illustrated in Fig. 1.We used one HS image to simulate one PAN image, one MS image (four bands) and four degraded HS images.1.It is seen that the integrated fusion method obtains the best evaluation values in terms of all the indices.This verifies the proposed method has the performance to integrate the complementary information in multiple temporal-spatial-spectral images.
Table 1.Evaluation of the fusion results

CONCLUSIONS
This paper presents a fusion method for multiple temporalspatial-spectral images based on the maximum a posteriori framework.Simulated experiments validated that the proposed method has good performance in terms of both visual inspection and quantitative evaluation.Future works would be carried out to test the proposed method using real remote sensing images.

MT fusion MS/HS fusion PAN/HS fusion
total band number.Generally, the band numbers of the observed images are less than or equal to x B .Here we use y to denote the images whose band number is equal to x B and use z to denote the images whose band number is less than x B .Thus, the th b band of the th k image of y can be denoted as , noise vector.For convenience, equation (1) can be rewritten as (2) by substituting the product of matrices , , z and y are both known quantities, so it is tenable for p y x provides a measure of the conformance of the estimated image x to the observed image y according to the observation model (2).Assuming that the noise is zero-mean Gaussian noise, and each image is independent , edge-preserving Huber-Markov image model the model parameter of the th b band, ξ is a local group of pixels called a clique, and ψ is the set of all the cliques, Fig.1 Simulated experiment of different fusion methods.
25) Here, ˆb x and b x represent the th b bands of the fused image and original image, ˆb b σ x x is the covariance between ˆb x and b x , ˆb m x and b m x their means, and ˆb σ x and b σ x their standard deviations.The ideal values of the RMSE, CC, UIQI, ERGAS and SA are, respectively, 0, 1, 1, 0 and 0. The evaluation results are shown in Table International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B7, 2012 XXII ISPRS Congress, 25 August -01 September 2012, Melbourne, Australia