DocumentCode :
2212105
Title :
Mutually constrained multimodal mapping for simultaneous development: Modeling vocal imitation and lexicon acquisition
Author :
Sasamoto, Yuki ; Yoshikawa, Yuichiro ; Asada, Minoru
Author_Institution :
ERATO, Asada Synergistic Intell. Project, JST, Suita, Japan
fYear :
2010
fDate :
18-21 Aug. 2010
Firstpage :
291
Lastpage :
296
Abstract :
This paper presents a method for the simultaneous development of vocal imitation and lexicon acquisition through mutually constrained multimodal mapping. A caregiver is basically assumed to provide matched pairs for the mappings, for example by imitating the learner's voice or by labelling an object the learner is looking at. However, this tendency cannot always be expected to be reliable. Subjective consistency is therefore introduced to judge whether an observed experience (external input) should be trusted as a reliable signal for learning: it estimates the value of one layer by combining the values from the other layers with the external input. Using the proposed method, a simulated infant robot learns mappings among the representations of its caregiver's phonemes, those of its own phonemes, and those of objects. The proposed mechanism yields correct mappings even when caregivers do not always give correct examples, just as real caregivers do not for their infants.
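The gating idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the mixing weight, the overlap-based consistency score, the threshold, and all function names (`combined_estimate`, `subjective_consistency`, `maybe_update`) are assumptions introduced here for clarity.

```python
import numpy as np

def combined_estimate(external, predictions, weight_ext=0.5):
    """Estimate one layer's value distribution by mixing the external
    input with predictions propagated from the other layers.
    (Equal-weight mixing is a hypothetical choice.)"""
    internal = np.mean(predictions, axis=0)
    est = weight_ext * external + (1.0 - weight_ext) * internal
    return est / est.sum()

def subjective_consistency(external, predictions):
    """Score how well the external input agrees with the internal
    predictions; a simple dot-product overlap is assumed here."""
    internal = np.mean(predictions, axis=0)
    return float(np.dot(external, internal))

def maybe_update(mapping, src_idx, external, predictions, lr=0.1, thresh=0.2):
    """Update one row of a cross-modal mapping only when the observed
    experience is judged subjectively consistent; otherwise ignore it."""
    if subjective_consistency(external, predictions) < thresh:
        return False  # unreliable caregiver input: do not learn from it
    est = combined_estimate(external, predictions)
    mapping[src_idx] = (1.0 - lr) * mapping[src_idx] + lr * est
    return True
```

Under this sketch, a mismatched pair (e.g. a wrong label) produces low consistency and is simply skipped, which is one way the learner could remain robust to unreliable caregiver examples.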
Keywords :
human-robot interaction; intelligent robots; natural language processing; paediatrics; robot vision; speech processing; caregiver; infants; lexicon acquisition; mutually constrained multimodal mapping; object labelling; phonemes; simulated infant robot; simultaneous development; vocal imitation; Conferences; Correlation; Labeling; Pediatrics; Predictive models; Probability distribution; Robots;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Development and Learning (ICDL), 2010 IEEE 9th International Conference on
Conference_Location :
Ann Arbor, MI
Print_ISBN :
978-1-4244-6900-0
Type :
conf
DOI :
10.1109/DEVLRN.2010.5578829
Filename :
5578829