DocumentCode :
1341041
Title :
Relevance Units Latent Variable Model and Nonlinear Dimensionality Reduction
Author :
Gao, Junbin ; Zhang, Jun ; Tien, David
Author_Institution :
Sch. of Comput. & Math., Charles Sturt Univ., Bathurst, NSW, Australia
Volume :
21
Issue :
1
fYear :
2010
Firstpage :
123
Lastpage :
135
Abstract :
A new dimensionality reduction method, called the relevance units latent variable model (RULVM), is proposed in this paper. RULVM is closely linked to the framework of the Gaussian process latent variable model (GPLVM) and originates from a recently developed sparse kernel model called the relevance units machine (RUM). RUM follows the idea of the relevance vector machine (RVM) under the Bayesian framework but relaxes the constraint that relevance vectors (RVs) must be selected from the input vectors; instead, RUM treats the relevance units (RUs) as part of the parameters to be learned from the data. As a result, RUM retains all the advantages of RVM while offering superior sparsity. RULVM inherits the sparseness offered by RUM, and the experimental results show that the RULVM algorithm possesses considerable computational advantages over the GPLVM algorithm.
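A minimal illustrative sketch of the idea described above, not the authors' implementation: latent points X and relevance units U are both treated as free parameters and learned jointly, with data Y reconstructed through an RBF-kernel mapping. The names (rbf, unpack, loss), the placeholder random data, and the simple quadratic regularizer standing in for the paper's Bayesian treatment are all assumptions for illustration.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    N, D, q, M = 100, 5, 2, 10          # samples, data dim, latent dim, number of RUs
    Y = rng.standard_normal((N, D))     # placeholder data matrix (hypothetical)

    def rbf(X, U, gamma=1.0):
        # RBF kernel between latent points X (N x q) and relevance units U (M x q)
        d2 = ((X[:, None, :] - U[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def unpack(theta):
        X = theta[:N * q].reshape(N, q)                     # latent coordinates
        U = theta[N * q:N * q + M * q].reshape(M, q)        # relevance units (learned, not
        W = theta[N * q + M * q:].reshape(M, D)             # picked from the inputs)
        return X, U, W

    def loss(theta, lam=1e-3):
        X, U, W = unpack(theta)
        R = Y - rbf(X, U) @ W           # reconstruction residual
        # Squared error plus simple Gaussian priors on X and W; a crude
        # stand-in for the Bayesian formulation in the paper.
        return (R ** 2).sum() + lam * ((X ** 2).sum() + (W ** 2).sum())

    theta0 = 0.1 * rng.standard_normal(N * q + M * q + M * D)
    res = minimize(loss, theta0, method="L-BFGS-B")
    X_latent, U_units, _ = unpack(res.x)   # X_latent: the low-dimensional embedding

Because only M relevance units (M << N) parameterize the mapping, the model stays sparse regardless of the sample size, which is the source of the computational advantage claimed over GPLVM.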
Keywords :
Gaussian processes; belief networks; learning (artificial intelligence); Bayesian framework; Gaussian process latent variable model; nonlinear dimensionality reduction method; relevance units latent variable model; relevance units machine; relevance vector machine; sparse kernel model; Dimensionality reduction; gaussian process latent variable model (GPLVM); relevance units machines (RUM); relevance vector machine (RVM); Algorithms; Artificial Intelligence; Computer Simulation; Handwriting; Humans; Nonlinear Dynamics; Pattern Recognition, Automated; Signal Processing, Computer-Assisted; Speech; Time Factors;
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/TNN.2009.2034964
Filename :
5340597