DocumentCode
449919
Title
Patterns of Multimodal Input Usage in Non-Visual Information Navigation
Author
Chen, Xiaoyu ; Tremaine, Marilyn
Author_Institution
New Jersey Institute of Technology
Volume
6
fYear
2006
fDate
04-07 Jan. 2006
Abstract
Multimodal input is known to benefit graphical user interfaces, but its value for non-visual interaction has not been established. To explore this question, an exploratory study was conducted with fourteen sighted subjects using a system that accepts both speech input and hand input on a touchpad. Findings include: (1) Users chose between the two input modalities based on the type of operation undertaken: navigation operations were performed primarily with touchpad input, while non-navigation instructions were issued primarily through speech input. (2) Multimodal error correction was not prevalent; repeating a failed operation until it succeeded and trying other methods within the same input modality were the dominant error-correction strategies. (3) The modality learned first was not necessarily the primary modality used later, but a training-order effect existed. These empirical results provide guidelines for designing non-visual multimodal input and establish a comparison baseline for a subsequent study with blind users.
Keywords
Error correction; Fingers; Graphical user interfaces; Guidelines; Haptic interfaces; Humans; Information systems; Navigation; Shape; Speech recognition
fLanguage
English
Publisher
IEEE
Conference_Titel
Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS '06)
ISSN
1530-1605
Print_ISBN
0-7695-2507-5
Type
conf
DOI
10.1109/HICSS.2006.377
Filename
1579539