DocumentCode :
81935
Title :
The Group Latent Variable Approach to Probit Binary Classifications
Author :
Yun Seog Yeun ; Nam-Joon Kim
Author_Institution :
Dept. of Comput.-Aided Mech. Design Eng., Daejin Univ., Gyeonggido, South Korea
Volume :
25
Issue :
7
fYear :
2014
fDate :
July 2014
Firstpage :
1277
Lastpage :
1286
Abstract :
This paper considers binary classification with the probit model under the expectation-maximization (EM) algorithm. In the Bayesian treatment of the probit model, latent variables are usually introduced to handle the otherwise intractable posterior: each training sample has one corresponding latent variable. However, the resulting EM algorithm requires matrix inversions, which become computationally expensive when the number of training samples is large. To overcome this problem, we employ the group latent-variable approach, in which each training sample has multiple corresponding latent variables instead of just one. The major advantage of this approach, which originates from Bayesian backfitting, is that the EM algorithm for the probit model no longer requires any matrix inversions. To obtain sparse classifiers, a Laplacian prior is employed, and a method to control the degree of sparseness is presented. Although the sparsity of the classifier is not determined fully automatically, it can be controlled by specifying a single parameter; in other words, we are free to choose the degree of sparseness. The proposed method is compared with the support vector machine, the relevance vector machine, and the generalized LASSO.
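To make the abstract's starting point concrete, the following is a minimal sketch of the *classical* one-latent-variable-per-sample EM for probit regression (the Albert–Chib augmentation) that the paper improves upon. It is not the paper's group latent-variable method; function names (`em_probit`, `_norm_pdf`, `_norm_cdf`) and the iteration count are illustrative assumptions. The linear solve in the M-step is exactly the per-iteration matrix inversion that the group approach eliminates.

```python
import math
import numpy as np

def _norm_pdf(t):
    # standard normal density, vectorized over a numpy array
    return np.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def _norm_cdf(t):
    # standard normal cdf via math.erf, elementwise
    return np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in t])

def em_probit(X, y, n_iter=200):
    """Classical EM for the probit model: one latent z_i ~ N(x_i^T w, 1)
    per sample, observed as y_i = 1 iff z_i > 0.  Illustrative sketch,
    not the paper's group latent-variable algorithm."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    _, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        m = X @ w
        pdf, cdf = _norm_pdf(m), _norm_cdf(m)
        # E-step: posterior mean of each truncated-normal latent z_i
        z = np.where(y == 1,
                     m + pdf / np.clip(cdf, 1e-12, None),
                     m - pdf / np.clip(1.0 - cdf, 1e-12, None))
        # M-step: least squares on the imputed latents -- this linear
        # solve of X^T X is the costly step for large sample counts
        w = np.linalg.solve(X.T @ X, X.T @ z)
    return w
```

On a small one-dimensional toy set with a bias column, the fitted weight on the feature comes out positive and the extreme points are classified correctly; the point of the sketch is only to show where the `X.T @ X` solve enters each EM iteration.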
Keywords :
belief networks; expectation-maximisation algorithm; pattern classification; Bayesian backfitting; EM algorithm; Laplacian prior; expectation-maximization algorithm; generalized LASSO; group latent variable approach; multiple latent variables; probit binary classifications; probit model; relevance vector machine; sparse classifiers; sparseness degree control; support vector machine; Bayes methods; Kernel; Laplace equations; Support vector machines; Training; Vectors; binary classification; latent variable; sparseness
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks and Learning Systems
Publisher :
IEEE
ISSN :
2162-237X
Type :
jour
DOI :
10.1109/TNNLS.2013.2285784
Filename :
6656000