Title :
vCocktail: Multiplexed-voice Menu Presentation Method for Wearable Computers
Author :
Ikei, Yasushi ; Yamazaki, Hitoshi ; Hirota, Koichi ; Hirose, Michitaka
Author_Institution :
Tokyo Metropolitan University
Abstract :
In this paper, we describe a novel voice menu presentation method, the vCocktail, designed for efficient human-computer interaction in wearable computing. The method reduces the time required for serial presentation of voice menus by introducing spatiotemporally multiplexed voices with enhanced separation cues. Perception error in judging voice direction was first measured to determine the directions and interval angles at which menu items should be placed so that the user can clearly distinguish among multiple items. Voice menu items were then presented under spatiotemporally multiplexed conditions with several settings of spatial localization, number of words, and onset interval. The experimental results showed that subjects could identify items very accurately when localization cues and appropriate onset intervals were provided. In addition, the proposed attenuating menu voice and cross-type spatial presentation sequence increased the correct answer ratio, effectively improving the distinction between menu items. A correct answer ratio of 99.7% was achieved for four-item multiplexing when an attenuating voice and a 0.2 s onset interval were used with the cross-type spatial sequence.
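As an illustration only, not the authors' implementation, the sketch below shows how a four-item menu might be scheduled as spatiotemporally multiplexed voices in the spirit of the abstract: temporally adjacent items are assigned spatially opposed directions (a cross-type ordering) with a 0.2 s onset interval and a simple per-item attenuation. The azimuth values, attenuation step, and all names in the code are assumptions made for this example.

```python
# Minimal sketch (assumed parameters, not the paper's implementation):
# assign each menu word a direction, an onset time, and a gain so that
# temporally adjacent voices are spatially far apart.

from dataclasses import dataclass

@dataclass
class VoiceItem:
    label: str          # menu word to be spoken by the synthesized voice
    azimuth_deg: float  # horizontal localization cue (0 = front, 90 = right)
    onset_s: float      # playback start time relative to the first item
    gain: float         # steady level; a simplified stand-in for the attenuating-voice cue

def schedule_vcocktail(labels, onset_interval_s=0.2, attenuation_step=0.15):
    """Schedule menu words as spatiotemporally multiplexed voices.

    The cross-type ordering alternates between opposite sides (left/right,
    front/back) so that items starting close together in time are widely
    separated in space.
    """
    # Hypothetical cross-type ordering of azimuths for a four-item menu.
    cross_azimuths = [-90.0, 90.0, 0.0, 180.0]
    items = []
    for i, label in enumerate(labels):
        items.append(VoiceItem(
            label=label,
            azimuth_deg=cross_azimuths[i % len(cross_azimuths)],
            onset_s=i * onset_interval_s,
            # Earlier items get a lower steady level here; the paper's
            # attenuating voice instead lowers each voice over time.
            gain=max(0.0, 1.0 - attenuation_step * (len(labels) - 1 - i)),
        ))
    return items

if __name__ == "__main__":
    for item in schedule_vcocktail(["open", "reply", "delete", "archive"]):
        print(item)
```

The scheduled items would then be rendered through a spatial audio engine (e.g., HRTF-based binaural synthesis) to produce the localization cues described in the abstract; that rendering step is omitted here.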
Keywords :
Aural menu interface; Onset interval; Sound localization; Spatiotemporal multiplication; Wearable computing; Chromium; Computer interfaces; Design methodology; High performance computing; Information processing; Space technology; Spatiotemporal phenomena; Speech recognition; Speech synthesis; Wearable computers;
Conference_Title :
Virtual Reality Conference, 2006
Print_ISBN :
1-4244-0224-7
DOI :
10.1109/VR.2006.141