SAR and Infrared Image Fusion in Complex Contourlet Domain Based on Joint Sparse Representation(in English)
Wu Yiquan①,②,③,④,⑤,⑥* Wang Zhilai①
①(College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China) ②(Jiangsu Key Laboratory of Big Data Analysis Technology, Nanjing University of Information Science & Technology, Nanjing 210044, China) ③(Zhejiang Province Key Laboratory for Signal Processing, Zhejiang University of Technology, Hangzhou 310023, China) ④(Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin 541004, China) ⑤(Key Laboratory of Geo-Spatial Information Technology, Ministry of Land and Resources, Chengdu University of Technology, Chengdu 610059, China) ⑥(MLR Key Laboratory of Metallogeny and Mineral Assessment Institute of Mineral Resources, Chinese Academy of Geological Sciences, Beijing 100037, China)
Abstract: To address the large grayscale difference between infrared and Synthetic Aperture Radar (SAR) images, and the fact that their fused image is often ill-suited to human visual perception, we propose a fusion method for SAR and infrared images in the complex contourlet domain based on joint sparse representation. First, we perform complex contourlet decomposition of the infrared and SAR images. Then, we employ the K-Singular Value Decomposition (K-SVD) method to obtain an over-complete dictionary of the low-frequency components of the two source images and, using a joint sparse representation model, generate a joint dictionary. We obtain the sparse representation coefficients of the low-frequency components of the source images over the joint dictionary by the Orthogonal Matching Pursuit (OMP) method, select them using a maximum-selection strategy, and reconstruct the result to obtain the fused low-frequency component. The high-frequency components are fused using two criteria: the visual sensitivity coefficient and the degree of energy matching. Finally, we obtain the fused image by the inverse complex contourlet transform. Compared with three classical fusion methods and recently proposed methods, e.g., one based on the Non-Subsampled Contourlet Transform (NSCT) and another based on sparse representation, the proposed method effectively highlights the salient features of the two source images and preserves their information to the greatest extent.
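The low-frequency fusion rule summarized in the abstract (sparse-code both source patches over a shared dictionary with OMP, then keep, per atom, the coefficient of larger magnitude) can be sketched in a few lines of numpy. This is a minimal illustration only: it uses a toy orthonormal dictionary in place of a learned K-SVD joint dictionary, operates on single vectors rather than image patches, and omits the complex contourlet decomposition entirely.

```python
import numpy as np

def omp(D, x, k):
    """Greedy Orthogonal Matching Pursuit: sparse-code x over dictionary D
    (columns are atoms), selecting at most k atoms."""
    residual = x.astype(float).copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit of x on the atoms selected so far
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

def fuse_lowfreq(D, xa, xb, k=2):
    """Fuse two low-frequency patches: sparse-code both over the same (joint)
    dictionary, keep per-atom the larger-magnitude coefficient, reconstruct."""
    ca, cb = omp(D, xa, k), omp(D, xb, k)
    fused = np.where(np.abs(ca) >= np.abs(cb), ca, cb)  # maximum-selection rule
    return D @ fused

# Toy demo: an orthonormal dictionary, so OMP recovery is exact.
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((8, 8)))
xa = 2.0 * D[:, 0] + 0.5 * D[:, 3]   # simulated SAR patch: strong atom 0
xb = 0.3 * D[:, 0] + 3.0 * D[:, 3]   # simulated infrared patch: strong atom 3
fused = fuse_lowfreq(D, xa, xb, k=2)
# fused ≈ 2.0*D[:,0] + 3.0*D[:,3]: each atom keeps its dominant source
```

In the paper's actual pipeline, `D` would be the joint dictionary learned by K-SVD from low-frequency components of both sources, and the rule would be applied patch-by-patch before the inverse complex contourlet transform.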
Foundation items: The National Natural Science Foundation of China (61573183), The Open Fund of Jiangsu Key Laboratory of Big Data Analysis Technology (KXK1403), The Open Fund of Zhejiang Province Key Laboratory for Signal Processing (ZJKL_6_SP-OP 2014-02), The Open Fund of Guangxi Key Lab of Multi-Source Information Mining and Security (MIMS14-01), The Open Fund of Key Laboratory of Geo-Spatial Information Technology (KLGSIT2015-05), The Open Fund of MLR Key Laboratory of Metallogeny and Mineral Assessment Institute of Mineral Resources (ZS1406)
Biography: Wu Yiquan (1963-), male, professor, Ph.D. supervisor. He received his doctorate from Nanjing University of Aeronautics and Astronautics in 1998, where he is now a professor and Ph.D. supervisor. His current research interests include remote sensing image processing and understanding, target detection and identification, visual detection and image measurement, and video processing and intelligence analysis. He has published more than 280 papers in academic journals at home and abroad. E-mail: nuaaimage@163.com. Wang Zhilai (1992-), male, born in Jiangxi province. He is a graduate student with the Department of Information and Communication Engineering, College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics. His research interests include remote sensing image processing and machine vision. E-mail: 1610156025@qq.com
Cite this article:
吴一全, 王志来. 基于联合稀疏表示的复Contourlet域SAR图像与红外图像融合(英文)[J]. 雷达学报, 2017, 6(4): 349-358.
Wu Yiquan, Wang Zhilai. SAR and Infrared Image Fusion in Complex Contourlet Domain Based on Joint Sparse Representation(in English). JOURNAL OF RADARS, 2017, 6(4): 349-358.
[1] Chen Lei, Yang Feng-bao, Wang Zhi-she, et al. Mixed fusion algorithm of SAR and visible images with feature level and pixel[J]. Opto-Electronic Engineering, 2014, 41(3): 55-60.
[2] Zeng Xian-wei, Fang Yang-wang, Wu You-li, et al. A new guidance law based on information fusion and optimal control of structure stochastic jump system[C]. Proceedings of the 2007 IEEE International Conference on Automation and Logistics, Jinan, China, 2007: 624-627.
[3] Ye Chun-qi, Wang Bao-shu, and Miao Qi-guang. Fusion algorithm of SAR and panchromatic images based on region segmentation in NSCT domain[J]. Systems Engineering and Electronics, 2010, 32(3): 609-613.
[4] Xu Xing, Li Ying, Sun Jin-qiu, et al. An algorithm for image fusion based on curvelet transform[J]. Journal of Northwestern Polytechnical University, 2008, 26(3): 395-398.
[5] Shi Zhi, Zhang Zhuo, and Yue Yan-gang. Adaptive image fusion algorithm based on shearlet transform[J]. Acta Photonica Sinica, 2013, 42(1): 115-120. DOI:10.3788/gzxb
[6] Liu Jian, Lei Ying-jie, Xing Ya-qiong, et al. Fusion technique for SAR and gray visible image based on hidden Markov model in non-subsampled shearlet transform domain[J]. Control and Decision, 2016, 31(3): 453-457.
[7] Chen Di-peng and Li Qi. The use of complex contourlet transform on fusion scheme[C]. Proceedings of World Academy of Science, Engineering and Technology, Prague, Czech Republic, 2005: 342-347.
[8] Wu Yi-quan, Wan Hong, and Ye Zhi-long. Fabric defect image noise reduction based on complex contourlet transform and anisotropic diffusion[J]. CAAI Transactions on Intelligent Systems, 2013, 8(3): 214-219.
[9] Wei Qi, Bioucas-Dias J, Dobigeon N, et al. Hyperspectral and multispectral image fusion based on a sparse representation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(7): 3658-3668. DOI:10.1109/TGRS.2014.2381272
[10] Yu Nan-nan, Qiu Tian-shuang, Bi Feng, et al. Image features extraction and fusion based on joint sparse representation[J]. IEEE Journal of Selected Topics in Signal Processing, 2011, 5(5): 1074-1082. DOI:10.1109/JSTSP.2011.2112332
[11] Wang Jun, Peng Jin-ye, Feng Xiao-yi, et al. Image fusion with nonsubsampled contourlet transform and sparse representation[J]. Journal of Electronic Imaging, 2013, 22(4): 043019. DOI:10.1117/1.JEI.22.4.043019
[12] Duarte M F, Sarvotham S, Baron D, et al. Distributed compressed sensing of jointly sparse signals[C]. Conference Record of the Thirty-Ninth Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 2005: 1537-1541.
[13] Aharon M, Elad M, and Bruckstein A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation[J]. IEEE Transactions on Signal Processing, 2006, 54(11): 4311-4322. DOI:10.1109/TSP.2006.881199
[14] Mallat S G and Zhang Zhi-feng. Matching pursuits with time-frequency dictionaries[J]. IEEE Transactions on Signal Processing, 1993, 41(12): 3397-3415. DOI:10.1109/78.258082
[15] Kong Wei-wei and Lei Ying-jie. Technique for image fusion based on NSST domain and human visual characteristics[J]. Journal of Harbin Engineering University, 2013, 34(6): 777-782.
[16] Fan Xin-nan, Zhang Ji, Li Min, et al. A multi-sensor image fusion algorithm based on local feature difference[J]. Journal of Optoelectronics·Laser, 2014, 25(10): 2025-2032.