Title :
Temporal nonlinear dimensionality reduction
Author :
Gashler, Mike ; Martinez, Tony
Author_Institution :
Dept. of Comput. Sci., Brigham Young Univ., Provo, UT, USA
Date :
July 31 2011-Aug. 5 2011
Abstract :
Existing nonlinear dimensionality reduction (NLDR) algorithms assume that distances between observations are uniformly scaled. Unfortunately, with many interesting systems, this assumption does not hold. We present a new technique called Temporal NLDR (TNLDR), which is specifically designed for analyzing the high-dimensional observations obtained from random walks with dynamical systems that have external controls. It uses the additional information implicit in ordered sequences of observations to compensate for non-uniform scaling in observation space. We demonstrate that TNLDR computes more accurate estimates of intrinsic state than regular NLDR, and we show that accurate estimates of state can be used to train accurate models of dynamical systems.
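The core idea in the abstract can be illustrated with a minimal sketch. This is not the authors' TNLDR algorithm; it is a hypothetical 1-D toy in which a random walk takes unit-length steps in intrinsic state, the observation map stretches distances non-uniformly, and the temporal ordering of observations (each consecutive pair is one intrinsic step apart) is used to undo that stretching, whereas a naive uniform-scaling estimate is badly distorted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Intrinsic 1-D state: a random walk with unit-length steps.
x = np.cumsum(rng.choice([-1.0, 1.0], size=200))

# Hypothetical observation map g(x) = sign(x) * x**2: monotone, but it
# stretches distances far from the origin, violating the uniform-scaling
# assumption that standard NLDR relies on.
y = np.sign(x) * x ** 2

# Naive estimate (uniform-scaling assumption): cumulative observation-space
# arc length, which inherits the non-uniform stretching of g.
naive = np.concatenate([[0.0], np.cumsum(np.abs(np.diff(y)))])

# Temporal estimate: the ordering tells us each observed transition is one
# intrinsic unit step, so accumulate signed unit steps instead of raw
# observation distances.
temporal = np.concatenate([[0.0], np.cumsum(np.sign(np.diff(y)))])

# The temporal estimate recovers intrinsic state exactly (up to the start
# offset); the naive estimate does not.
err_temporal = np.max(np.abs(temporal - (x - x[0])))
err_naive = np.max(np.abs(naive - np.abs(x - x[0])))
print(err_temporal, err_naive)
```

In the paper's setting the observations are high-dimensional and the compensation is folded into an NLDR embedding rather than a 1-D cumulative sum, but the principle sketched here is the same: ordered sequences carry scale information that unordered point clouds do not.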
Keywords :
learning (artificial intelligence); random processes; TNLDR algorithm; dynamical system; nonuniform scaling; random-walks; temporal NLDR; temporal nonlinear dimensionality reduction; Computational modeling; Heuristic algorithms; Optimization; Prediction algorithms; Recurrent neural networks; Robots; Training;
Conference_Title :
The 2011 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
San Jose, CA
Print_ISBN :
978-1-4244-9635-8
DOI :
10.1109/IJCNN.2011.6033465