DocumentCode :
3497253
Title :
Temporal nonlinear dimensionality reduction
Author :
Gashler, Mike ; Martinez, Tony
Author_Institution :
Dept. of Comput. Sci., Brigham Young Univ., Provo, UT, USA
fYear :
2011
fDate :
July 31 - Aug. 5, 2011
Firstpage :
1959
Lastpage :
1966
Abstract :
Existing nonlinear dimensionality reduction (NLDR) algorithms assume that distances between observations are uniformly scaled. Unfortunately, many interesting systems violate this assumption. We present a new technique, Temporal NLDR (TNLDR), designed specifically for analyzing the high-dimensional observations obtained from random walks through dynamical systems with external controls. It exploits the additional information implicit in ordered sequences of observations to compensate for non-uniform scaling in observation space. We demonstrate that TNLDR computes more accurate estimates of intrinsic state than regular NLDR, and that accurate estimates of state can be used to train accurate models of dynamical systems.
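The core idea described above, that temporal adjacency in an observation sequence carries scale information that static NLDR discards, can be sketched as follows. This is not the authors' TNLDR algorithm, only a minimal illustration of the principle: estimate a local scale at each point from the distances to its temporal neighbors, rescale the pairwise distance matrix by those scales, and then embed with classical MDS. The function name and all implementation choices here are assumptions for illustration.

```python
import numpy as np

def temporal_rescaled_embedding(X, n_components=2):
    """Illustrative sketch (not the paper's TNLDR): compensate for
    non-uniform scaling using temporal neighbors, then embed.

    X : (n, d) array of observations in temporal order.
    Returns an (n, n_components) low-dimensional embedding.
    """
    n = len(X)
    # Full pairwise Euclidean distance matrix.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Local scale: distance between consecutive observations in the
    # sequence; interior points average their two temporal neighbors.
    step = np.linalg.norm(np.diff(X, axis=0), axis=1)
    scale = np.empty(n)
    scale[0], scale[-1] = step[0], step[-1]
    scale[1:-1] = 0.5 * (step[:-1] + step[1:])
    scale = np.maximum(scale, 1e-12)  # guard against zero steps
    # Rescale each distance by the geometric mean of its endpoint scales.
    Dn = D / np.sqrt(np.outer(scale, scale))
    # Classical MDS on the rescaled distances.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (Dn ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

In regions where the random walk moves slowly through observation space, the per-point scales shrink and the rescaling stretches those distances back out, which is the kind of compensation the abstract attributes to exploiting ordered sequences.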
Keywords :
learning (artificial intelligence); random processes; TNLDR algorithm; dynamical system; nonuniform scaling; random-walks; temporal NLDR; temporal nonlinear dimensionality reduction; Computational modeling; Heuristic algorithms; Optimization; Prediction algorithms; Recurrent neural networks; Robots; Training;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
The 2011 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
San Jose, CA, USA
ISSN :
2161-4393
Print_ISBN :
978-1-4244-9635-8
Type :
conf
DOI :
10.1109/IJCNN.2011.6033465
Filename :
6033465