Author_Institution :
Northeastern Univ., Boston, MA, USA
Abstract :
The use of specially designed cameras for robust image classification, biometrics, and surveillance has emerged recently, and it inevitably involves multi-modality classification problems. Cross-modality variation, together with within- and between-class variations, makes the problem significantly more complex. In this paper, we propose a hierarchical hyperlingual-words based approach to these problems. First, a novel structure, hyperlingual-words, is created to capture high-level semantic features both across modalities and within each modality. Second, considering the impact of different histogram resolutions, we apply pyramid histogram matching to the hierarchical hyperlingual-words to weight the Chi-square metric and obtain a more discriminative distance measure. Finally, extensive experiments are conducted on two data sets, the BUAA-VisNir Face Database and the Oulu-CASIA NIR&VIS Database, and the results show that our method outperforms the state of the art on cross-modality face recognition with pose and expression variations.
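To make the matching step concrete, the sketch below shows a pyramid-weighted Chi-square distance between two sets of histograms. It is a minimal illustration, not the paper's exact formulation: the level weights (coarser levels down-weighted by powers of two, as in standard spatial pyramid matching), the function names, and the input layout (a list of per-cell histograms at each pyramid level) are all assumptions made for this example.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms (eps avoids division by zero)."""
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def pyramid_chi_square(hists_a, hists_b, levels=3):
    """Level-weighted Chi-square distance over a histogram pyramid.

    hists_a, hists_b: lists of length `levels`; element l is the list of
    cell histograms at pyramid level l (level 0 = coarsest).
    Assumed weighting: finest level gets weight 1, each coarser level is
    halved, mirroring common pyramid-matching schemes.
    """
    total = 0.0
    for level in range(levels):
        weight = 1.0 / (2 ** (levels - 1 - level))
        for ha, hb in zip(hists_a[level], hists_b[level]):
            total += weight * chi_square(ha, hb)
    return total

# Toy usage: one cell at level 0, four cells at level 1, sixteen at level 2.
rng = np.random.default_rng(0)
pyr_a = [[rng.random(8) for _ in range(4 ** l)] for l in range(3)]
pyr_b = [[rng.random(8) for _ in range(4 ** l)] for l in range(3)]
print(pyramid_chi_square(pyr_a, pyr_b))
```

In practice the per-cell histograms would be built from the learned hyperlingual-words rather than random data; the example only illustrates how a pyramid of resolutions can weight the Chi-square metric.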
Keywords :
face recognition; feature extraction; image classification; image matching; statistical analysis; visual databases; BUAA-VisNir Face Database; Oulu-CASIA NIR and VIS Database; between-class variation; Chi-square metric; cross-modality variation; expression variation; hierarchical hyperlingual-words based approach; high-level semantic feature; multi-modality face classification; pose variation; pyramid histogram matching; robust image classification; within-class variation; Databases; Face; Face recognition; Histograms; Laplace equations; Measurement; Training