DocumentCode :
2283387
Title :
A semantic fusion based Multimodal Interaction Panel for virtual city planning
Author :
He, Yiyue ; Geng, Guohua ; Zhou, Mingquan ; Li, Kang ; Du, Zhuoming ; Zhao, Yongmei
Author_Institution :
Coll. of Inf. Sci. & Technol., Northwest Univ., Xi'an, China
Volume :
4
fYear :
2011
fDate :
10-12 June 2011
Firstpage :
399
Lastpage :
404
Abstract :
Multimodal Interaction (MMI) in Virtual Environments (VEs) has become a focus of both Virtual Reality (VR) and Human-Computer Interaction (HCI) research. In this paper, drawing on existing task-oriented integration algorithms, we propose a new hierarchical multimodal integration model based on semantics extraction with probability and time constraints. Against the background of virtual city planning, a Multimodal Interaction Panel (MIP) based on multiple metaphors is constructed by introducing everyday pen-and-paper operations into the VE and combining spatial position trackers, a microphone, and pen-based sketches and gestures. A variety of common and application-oriented MMI techniques are designed and implemented, and some of their integration processes are analyzed in detail. Evaluation shows that the MIP significantly improves the naturalness and efficiency of MMI.
Keywords :
human computer interaction; town and country planning; virtual reality; hierarchical multimodal integration model; human-computer interaction; semantic fusion; task-oriented integration algorithms; virtual city planning; virtual environment; virtual reality; Context; Merging; Physical layer; Semantics; Syntactics; Three dimensional displays; Urban planning; city planning; multimodal integration; multimodal interaction; multimodal interaction panel; semantics extraction;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Computer Science and Automation Engineering (CSAE), 2011 IEEE International Conference on
Conference_Location :
Shanghai
Print_ISBN :
978-1-4244-8727-1
Type :
conf
DOI :
10.1109/CSAE.2011.5952877
Filename :
5952877