Title :
Neural network learning control of robot manipulators using gradually increasing task difficulty
Author :
Sanger, Terence D.
Author_Institution :
Jet Propulsion Lab., California Inst. of Technol., Pasadena, CA, USA
Date :
1 June 1994
Abstract :
Trajectory extension learning is an incremental method for training an artificial neural network to approximate the inverse dynamics of a robot manipulator. Training data near a desired trajectory are obtained by slowly varying a parameter of the trajectory from a region in which the inverse dynamics are easily solvable toward the desired behavior. The parameter can be average speed, path shape, feedback gain, or any other controllable variable. As learning proceeds, an approximate solution to the local inverse dynamics for each value of the parameter is used to guide learning for the next value. Convergence conditions are given for two variations of the algorithm. The method is demonstrated on a real 2-joint direct-drive robot arm and on a simulated 3-joint redundant arm, both using simulated equilibrium-point control.
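The abstract describes an incremental parameter-sweep scheme. The following is a minimal sketch of that idea only, not the paper's implementation: the simulate and desired_trajectory interfaces, the parameter grid alphas, and the choice of a generic scikit-learn regressor are all illustrative assumptions.

    # Illustrative sketch of trajectory extension learning (assumed interfaces,
    # not the authors' code): a trajectory parameter is swept from an "easy"
    # value toward the desired one, and the network trained at each value
    # bootstraps data collection for the next.
    from sklearn.neural_network import MLPRegressor

    def trajectory_extension_learning(simulate, desired_trajectory, alphas):
        """simulate(trajectory, torque_fn) -> (states, torques) is assumed to
        run the arm along `trajectory`, applying `torque_fn` as a learned
        feedforward term (None means feedback control alone), and to return
        state/torque samples recorded near the trajectory. `alphas` runs from
        the easiest parameter value (e.g. low average speed) to the desired one.
        """
        # warm_start keeps the previous weights, so each fit refines the
        # current inverse-dynamics approximation rather than restarting.
        net = MLPRegressor(hidden_layer_sizes=(50,), warm_start=True, max_iter=500)
        torque_fn = None                       # easiest case: no learned feedforward yet
        for alpha in alphas:
            traj = desired_trajectory(alpha)   # parameterized path (speed, shape, gain, ...)
            states, torques = simulate(traj, torque_fn)
            net.fit(states, torques)           # update the inverse-dynamics model
            torque_fn = net.predict            # approximate solution guides the next step
        return net

A caller might, for example, let alpha scale the average speed of the desired trajectory and sweep it over a grid from 0.1 to 1.0.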
Keywords :
convergence; dynamics; inverse problems; learning (artificial intelligence); manipulators; neural nets; 2-joint direct drive robot arm; average speed; convergence conditions; feedback gain; gradually increasing task difficulty; inverse dynamics; neural network learning control; path shape; robot manipulators; simulated 3-joint redundant arm; simulated equilibrium point control; trajectory extension learning; Artificial neural networks; Biological neural networks; Control systems; Delay; Manipulator dynamics; Neural networks; Robot control; Shape control; State-space methods; Training data;
Journal_Title :
IEEE Transactions on Robotics and Automation