Title :
Unconstrained Multimodal Multi-Label Learning
Author :
Yan Huang ; Wei Wang ; Liang Wang
Author_Institution :
Center for Res. on Intell. Perception & Comput., Inst. of Autom., Beijing, China
Abstract :
Multimodal learning has mostly been studied under the assumptions that multiple label assignments are independent of each other and that all modalities are available. In this paper, we consider a more general problem in which the labels contain dependency relationships and some modalities are likely to be missing. To this end, we propose a multi-label conditional restricted Boltzmann machine (ML-CRBM), which handles modality completion, fusion, and multi-label prediction in a unified framework. The proposed model is able to generate missing modalities based on observed ones by explicitly modeling and sampling their conditional distributions. It can then discriminatively fuse multiple modalities to obtain shared representations under the supervision of class labels. To account for label co-occurrence, the proposed model formulates multi-label prediction as a max-margin-based multi-task learning problem. Model parameters can be learned jointly by seeking a balance between being generative for modality generation and being discriminative for label prediction. We perform a series of experiments on classification, visualization, and retrieval, and the experimental results clearly demonstrate the effectiveness of our method.
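The modality-completion step described above (generating a missing modality by sampling its conditional distribution given the observed one) can be sketched as clamped Gibbs sampling in a two-modality RBM. This is an illustrative sketch only, not the authors' implementation: the dimensions, weight matrices, and the `complete_modality` helper are all hypothetical, and the weights are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: two visible modality blocks and one shared hidden layer.
D1, D2, H = 8, 6, 4
W1 = rng.normal(0, 0.1, (D1, H))   # weights: modality 1 <-> hidden
W2 = rng.normal(0, 0.1, (D2, H))   # weights: modality 2 <-> hidden
b  = np.zeros(H)                   # hidden biases
c2 = np.zeros(D2)                  # visible biases for modality 2

def complete_modality(v1, n_gibbs=50):
    """Impute a missing modality v2 given an observed modality v1 by
    Gibbs sampling with v1 clamped (illustrative, untrained weights)."""
    v2 = rng.random(D2)  # random initialization for the missing block
    pv2 = sigmoid(c2)
    for _ in range(n_gibbs):
        # Sample hidden units conditioned on both visible blocks.
        ph = sigmoid(v1 @ W1 + v2 @ W2 + b)
        h = (rng.random(H) < ph).astype(float)
        # Resample only the missing modality given the hidden state;
        # the observed modality v1 stays clamped throughout.
        pv2 = sigmoid(h @ W2.T + c2)
        v2 = (rng.random(D2) < pv2).astype(float)
    return pv2  # final conditional mean as the completed modality

v1_obs = (rng.random(D1) < 0.5).astype(float)
v2_hat = complete_modality(v1_obs)
```

In the full ML-CRBM, the sampled modality would feed into the shared representation used for the max-margin multi-label predictor; here only the generative completion step is shown.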
Keywords :
Boltzmann machines; learning (artificial intelligence); ML-CRBM; conditional distributions; label prediction; max-margin-based multitask learning problem; modality completion; multilabel conditional restricted Boltzmann machine; unconstrained multimodal multilabel learning; Correlation; Data models; Fuses; Learning systems; Neural networks; Predictive models; Semantics; Multi-label learning; multi-task learning; multimodal learning; restricted Boltzmann machine;
Journal_Title :
IEEE Transactions on Multimedia
DOI :
10.1109/TMM.2015.2476658