DocumentCode :
1645994
Title :
Integrating sign language into a virtual reality environment
Author :
Prime, Martin
Author_Institution :
Rutherford Appleton Lab., Chilton, UK
fYear :
1995
fDate :
11/15/1995 12:00:00 AM
Firstpage :
42491
Lastpage :
42494
Abstract :
In a multi-user distributed virtual reality environment, such as DIVE (Distributed Interactive Virtual Environment), where the various participants each have a 3-D representation, a model for interaction is necessary. The Spatial Interaction Model describes how objects should interact with each other in the virtual environment. This model has since been extended to cover more interactions and is described in the paper. Language and gesture can be coordinated to form a single communication system more powerful than either alone. Hand gesture in particular is critical for fast, interactive human-to-human communication. A VR system that supports a sign language channel, and thus also a gestural channel, will have an enriched communication medium. The main concept of this paper is that in a virtual reality environment it is possible to support users with different capabilities for interaction. The factors considered are whether the user can hear; their ability to use sign language; and the technology they have at their disposal on entry into the VR environment. A user is thus represented in the virtual environment as an entity with those capabilities available to it.
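The abstract's central idea, a participant entering the environment as an entity carrying its own interaction capabilities, can be sketched roughly as follows. This is a minimal illustrative sketch only: the class, field names, and channel labels are assumptions for exposition, not taken from the paper or from DIVE.

from dataclasses import dataclass, field

@dataclass
class Participant:
    """An entity in the VR environment, described by its interaction capabilities."""
    name: str
    can_hear: bool                               # whether the user can use an audio channel
    can_sign: bool                               # whether the user can produce/read sign language
    devices: set = field(default_factory=set)    # hardware on entry, e.g. {"audio", "glove", "text"}

    def channels(self) -> set:
        """Communication channels this participant can actually use, given ability and hardware."""
        available = set()
        if self.can_hear and "audio" in self.devices:
            available.add("audio")
        if self.can_sign and "glove" in self.devices:
            available.add("sign")
        if "text" in self.devices:
            available.add("text")
        return available

def shared_channels(a: Participant, b: Participant) -> set:
    """Channels both entities support, i.e. the media available for their interaction."""
    return a.channels() & b.channels()

if __name__ == "__main__":
    hearing_user = Participant("A", can_hear=True, can_sign=False, devices={"audio", "text"})
    deaf_signer = Participant("B", can_hear=False, can_sign=True, devices={"glove", "text"})
    print(shared_channels(hearing_user, deaf_signer))  # -> {'text'}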
Keywords :
distributed processing; handicapped aids; interactive systems; user interfaces; virtual reality; DIVE; Distributed Interactive Virtual Environment; Spatial Interaction Model; gesture; hand gesture; multi-user distributed virtual reality; sign language; three dimensional representation; virtual reality;
fLanguage :
English
Publisher :
IET
Conference_Title :
IEE Colloquium on Visualisation of Three-Dimensional Fields
Conference_Location :
London
Type :
conf
DOI :
10.1049/ic:19951283
Filename :
498895