Abstract:
Recent generations of humanoid robots increasingly resemble humans in shape and articulatory capacities. This progress has motivated researchers to design dancing robots that can mimic the complexity and style of human choreographic dancing. Such complex actions are usually programmed manually and ad hoc, an approach that is both tedious and inflexible. Researchers at the University of Tokyo have developed the learning-from-observation (LFO) training method to overcome this difficulty [1, 2]. LFO enables a robot to acquire knowledge of what to do and how to do it by observing human demonstrations. Direct mapping from human joint angles to robot joint angles doesn't work well because of the dynamic and kinematic differences between the observed person and the robot (for example, weight, balance, and arm and leg lengths). LFO therefore relies on predesigned task models, which represent only the actions (and features thereof) that are essential to mimicry. It then adapts these actions to the robot's morphology and dynamics so that the robot can mimic the movement. This indirect, two-step mapping is crucial for robust imitation and performance.
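To make the two-step mapping concrete, the minimal Python sketch below separates observation (abstracting a demonstration into morphology-independent task parameters) from adaptation (re-generating targets for the robot's own body). It is an illustration of the idea only, under assumed simplifications; the class names, function names, and numbers are hypothetical and are not taken from the LFO system described above.

# Conceptual sketch of an indirect, two-step mapping (illustrative only;
# all names and values are hypothetical, not the authors' implementation).
from dataclasses import dataclass
from typing import List


@dataclass
class KeyPose:
    """Task-model element: only the features essential to the dance move."""
    time: float               # seconds from the start of the demonstration
    hand_height_ratio: float  # hand height as a fraction of performer height
    step_length_ratio: float  # stride as a fraction of leg length


def observe_human(demo_joint_angles) -> List[KeyPose]:
    """Step 1: abstract the observed motion into morphology-independent
    task parameters instead of raw joint angles (placeholder values)."""
    # In a real system these would come from motion capture and recognition.
    return [
        KeyPose(time=0.0, hand_height_ratio=0.55, step_length_ratio=0.0),
        KeyPose(time=1.2, hand_height_ratio=0.90, step_length_ratio=0.3),
    ]


def adapt_to_robot(task: List[KeyPose], robot_height: float,
                   robot_leg_length: float, max_step: float):
    """Step 2: re-generate targets for the robot's own body, clamping
    values that would exceed its step limit."""
    targets = []
    for pose in task:
        hand_z = pose.hand_height_ratio * robot_height
        step = min(pose.step_length_ratio * robot_leg_length, max_step)
        targets.append((pose.time, hand_z, step))
    return targets


if __name__ == "__main__":
    task_model = observe_human(demo_joint_angles=None)
    # A shorter robot with a conservative step limit performs the same
    # dance "shape", scaled to its own body rather than the human's.
    for t, hand_z, step in adapt_to_robot(task_model, robot_height=1.3,
                                          robot_leg_length=0.5, max_step=0.2):
        print(f"t={t:.1f}s  hand height={hand_z:.2f} m  step={step:.2f} m")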
Keywords:
humanoid robots; learning (artificial intelligence); mobile robots; robot dynamics; robot kinematics; artificial intelligence; dancing robots; human choreographic dancing; learning-from-observation training method; robot joint angle mapping; Dance Partner Robot; Keepon; autonomous behavior; chaotic itinerancy; intermodal mapping; neural mapping; rhythmic intelligence; robotics; situated knowledge; social intelligence; symbol grounding; synesthesia; task models