DocumentCode :
3274672
Title :
Multimodal image fusion via sparse representation with local patch dictionaries
Author :
Minjae Kim ; David K. Han ; Hanseok Ko
Author_Institution :
School of Electrical Engineering, Korea University, Seoul, South Korea
fYear :
2013
fDate :
15-18 Sept. 2013
Firstpage :
1301
Lastpage :
1305
Abstract :
Sparse representation is a promising technique in image processing and pattern recognition. It generally exploits over-complete dictionaries that are either fixed and known in advance or learned with a training algorithm such as K-SVD. In this paper, we propose a new multimodal image fusion approach based on a sparsity model with local patch dictionaries generated directly from the input images. For every location in the image, a dictionary is constructed simply from the neighboring patches. Experimental results show that the proposed method is efficient and competitive with several existing image fusion methods.
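The record gives only the abstract, so the following is a minimal, illustrative sketch of the general idea rather than the authors' exact algorithm: for each patch location, a dictionary is assembled from the neighboring patches of the source images, each source patch is sparse-coded with orthogonal matching pursuit, and the reconstruction with the larger coefficient activity is kept. The patch size, stride, neighborhood radius, sparsity level, the choose-max fusion rule, and all function names are assumptions made for illustration.

import numpy as np

def omp(D, x, n_nonzero):
    # Plain orthogonal matching pursuit: sparse-code x over dictionary D (columns are atoms).
    residual = x.astype(float).copy()
    idx, sol = [], np.zeros(0)
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in idx:
            break
        idx.append(k)
        sol, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ sol
    coef[idx] = sol
    return coef

def local_dictionary(img, i, j, size, radius):
    # Dictionary whose atoms are the normalized patches in a (2*radius+1)^2 neighborhood of (i, j).
    h, w = img.shape
    atoms = []
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            ii, jj = i + di, j + dj
            if 0 <= ii <= h - size and 0 <= jj <= w - size:
                atom = img[ii:ii + size, jj:jj + size].ravel().astype(float)
                norm = np.linalg.norm(atom)
                if norm > 1e-8:
                    atoms.append(atom / norm)
    return np.stack(atoms, axis=1)  # shape: (size*size, n_atoms)

def fuse(img_a, img_b, size=8, stride=4, radius=4, n_nonzero=4):
    # Toy sliding-window fusion of two registered single-channel images:
    # sparse-code each source patch over a local dictionary drawn from both inputs
    # and keep the representation with the larger L1 activity (illustrative rule).
    h, w = img_a.shape
    fused = np.zeros((h, w))
    weight = np.zeros((h, w))
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            D = np.hstack([local_dictionary(img_a, i, j, size, radius),
                           local_dictionary(img_b, i, j, size, radius)])
            pa = img_a[i:i + size, j:j + size].ravel().astype(float)
            pb = img_b[i:i + size, j:j + size].ravel().astype(float)
            ca, cb = omp(D, pa, n_nonzero), omp(D, pb, n_nonzero)
            c = ca if np.abs(ca).sum() >= np.abs(cb).sum() else cb
            fused[i:i + size, j:j + size] += (D @ c).reshape(size, size)
            weight[i:i + size, j:j + size] += 1.0
    return fused / np.maximum(weight, 1.0)  # average overlapping reconstructions

Because the dictionaries are built per location from the inputs themselves, no offline training stage (e.g., K-SVD) is required, which is the property the abstract emphasizes.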
Keywords :
compressed sensing; image fusion; image representation; learning (artificial intelligence); singular value decomposition; K-SVD; image processing; input images; local patch dictionaries; multimodal image fusion approach; neighboring patches; overcomplete dictionaries; pattern recognition; sparse representation; sparsity model; training algorithm; Dictionaries; Image fusion; Matching pursuit algorithms; Noise reduction; Sensors; Transforms; Vectors; Dictionary learning; Image fusion; K-SVD; Non-local means denoising; Sparse representation;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2013 20th IEEE International Conference on Image Processing (ICIP)
Conference_Location :
Melbourne, VIC
Type :
conf
DOI :
10.1109/ICIP.2013.6738268
Filename :
6738268