DocumentCode :
1313695
Title :
User-centered modeling for spoken language and multimodal interfaces
Author :
Oviatt, Sharon
Author_Institution :
Dept. of Comput. Sci., Oregon Graduate Inst. of Sci. & Technol., Beaverton, OR, USA
Volume :
3
Issue :
4
fYear :
1996
Firstpage :
26
Lastpage :
35
Abstract :
By modeling difficult sources of linguistic variability in speech and language, we can design interfaces that transparently guide human input to match system processing capabilities. Such work will yield more user-centered and robust interfaces for next-generation spoken language and multimodal systems.
Keywords :
human factors; interactive systems; multimedia computing; natural language interfaces; natural languages; speech processing; human input; linguistic variability; multimodal interfaces; multimodal systems; natural language processing; next generation spoken language; robust interfaces; spoken language; system processing capabilities; user centered modeling; Automatic control; Face recognition; Humans; Impedance matching; Mobile communication; Natural languages; Robustness; Speech processing; Speech recognition; Timing;
fLanguage :
English
Journal_Title :
MultiMedia, IEEE
Publisher :
ieee
ISSN :
1070-986X
Type :
jour
DOI :
10.1109/93.556458
Filename :
556458