SAR and Infrared Image Fusion in Complex Contourlet Domain Based on Joint Sparse Representation
Wu Yiquan①,②,③,④,⑤,⑥* Wang Zhilai①
①(College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China) ②(Jiangsu Key Laboratory of Big Data Analysis Technology, Nanjing University of Information Science & Technology, Nanjing 210044, China) ③(Zhejiang Province Key Laboratory for Signal Processing, Zhejiang University of Technology, Hangzhou 310023, China) ④(Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin 541004, China) ⑤(Key Laboratory of Geo-Spatial Information Technology, Ministry of Land and Resources, Chengdu University of Technology, Chengdu 610059, China) ⑥(MLR Key Laboratory of Metallogeny and Mineral Assessment, Institute of Mineral Resources, Chinese Academy of Geological Sciences, Beijing 100037, China)
Abstract To address the large grayscale difference between infrared and Synthetic Aperture Radar (SAR) images and the poor suitability of their fused image for human visual perception, we propose a fusion method for SAR and infrared images in the complex contourlet domain based on joint sparse representation. First, we perform complex contourlet decomposition of the infrared and SAR images. Then, we employ the K-Singular Value Decomposition (K-SVD) method to obtain an over-complete dictionary of the low-frequency components of the two source images and, using a joint sparse representation model, generate a joint dictionary. We obtain the sparse representation coefficients of the low-frequency components of the source images over the joint dictionary by the Orthogonal Matching Pursuit (OMP) method, select them with the selection-maximization strategy, and reconstruct them to obtain the fused low-frequency components. The high-frequency components are fused using two criteria: the coefficient of visual sensitivity and the degree of energy matching. Finally, we obtain the fused image by the inverse complex contourlet transform. Compared with three classical fusion methods and recently presented fusion methods, such as those based on the Non-Subsampled Contourlet Transform (NSCT) and on sparse representation, the proposed method more effectively highlights the salient features of the two source images and inherits their information to the greatest extent.
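As a rough illustration of the low-frequency fusion step described in the abstract, the following Python sketch learns a joint dictionary from the pooled low-frequency patches of the two source images, sparse-codes each source with OMP, and merges the coefficients with a selection-maximization rule. It is a minimal sketch under stated assumptions, not the authors' implementation: the complex contourlet decomposition is assumed to be computed elsewhere, K-SVD is replaced by scikit-learn's MiniBatchDictionaryLearning, and the joint sparse representation model is reduced to coding both sources against a single shared dictionary; the names low_a and low_b, the patch size, dictionary size, and sparsity level are all illustrative.

```python
# Minimal sketch of low-frequency fusion via a jointly learned dictionary,
# OMP sparse coding, and per-coefficient selection-maximization.
# Assumptions: low_a and low_b are float 2D arrays holding the low-frequency
# sub-bands of the SAR and infrared images (same shape); K-SVD is stood in
# for by MiniBatchDictionaryLearning.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (
    extract_patches_2d,
    reconstruct_from_patches_2d,
)

def fuse_low_frequency(low_a, low_b, patch_size=(8, 8), n_atoms=256, sparsity=5):
    dim = patch_size[0] * patch_size[1]

    # Vectorize overlapping patches from both low-frequency sub-bands.
    patches_a = extract_patches_2d(low_a, patch_size).reshape(-1, dim)
    patches_b = extract_patches_2d(low_b, patch_size).reshape(-1, dim)

    # Remove (and remember) patch means so the dictionary models structure,
    # not brightness offsets.
    mean_a = patches_a.mean(axis=1, keepdims=True)
    mean_b = patches_b.mean(axis=1, keepdims=True)

    # Learn one joint dictionary from the pooled patches of both sources
    # (a stand-in for the K-SVD step of the paper).
    training = np.vstack([patches_a - mean_a, patches_b - mean_b])
    learner = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=sparsity,
        random_state=0,
    )
    learner.fit(training)

    # Sparse-code each source over the joint dictionary with OMP.
    codes_a = learner.transform(patches_a - mean_a)
    codes_b = learner.transform(patches_b - mean_b)

    # Selection-maximization rule: keep, per coefficient, the entry of
    # larger magnitude (a simplification of the full joint model).
    fused_codes = np.where(np.abs(codes_a) >= np.abs(codes_b), codes_a, codes_b)
    fused_mean = np.where(np.abs(mean_a) >= np.abs(mean_b), mean_a, mean_b)

    # Reconstruct fused patches and average their overlaps back into an image.
    fused_patches = fused_codes @ learner.components_ + fused_mean
    return reconstruct_from_patches_2d(fused_patches.reshape(-1, *patch_size), low_a.shape)
```

The select-maximum rule here operates coefficient-wise on the two code matrices; the high-frequency fusion criteria (visual sensitivity coefficient and energy matching degree) and the complex contourlet transform itself are outside the scope of this sketch.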
Fund: The National Natural Science Foundation of China (61573183), The Open Fund of Jiangsu Key Laboratory of Big Data Analysis Technology (KXK1403), The Open Fund of Zhejiang Province Key Laboratory for Signal Processing (ZJKL_6_SP-OP 2014-02), The Open Fund of Guangxi Key Lab of Multi-Source Information Mining and Security (MIMS14-01), The Open Fund of Key Laboratory of Geo-Spatial Information Technology (KLGSIT2015-05), The Open Fund of MLR Key Laboratory of Metallogeny and Mineral Assessment, Institute of Mineral Resources (ZS1406)
Cite this article:
Wu Yiquan, Wang Zhilai. SAR and Infrared Image Fusion in Complex Contourlet Domain Based on Joint Sparse Representation[J]. JOURNAL OF RADARS, 2017, 6(4): 349-358.