Title :
On-Body IE: A Head-Mounted Multimodal Augmented Reality System for Learning and Recalling Faces
Author :
Sonntag, Daniel; Toyama, T.
Author_Institution :
German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany
Abstract :
We present a new augmented reality (AR) system for knowledge-intensive, location-based expert work. The multimodal interaction system combines multiple on-body input and output devices: a speech-based dialogue system, a head-mounted augmented reality display (HMD), and a head-mounted eye tracker. The interaction devices were selected to augment and improve expert work in a specific medical application context that demonstrates the system's potential. In the sensitive domain of examining patients in a cancer screening program, we combine several active user input devices in the way most convenient for both the patient and the doctor. The resulting multimodal AR system is an on-body intelligent environment (IE); it has the potential to yield higher performance outcomes and provides a direct data acquisition control mechanism. It augments the doctor's ability to recall the specific patient context through a virtual, context-based, patient-specific "external brain" that remembers patient faces and adapts the virtual augmentation to the specific patient observation and finding context. In addition, patient data can be displayed on the HMD, triggered by voice or by object/patient recognition. The learned patient faces and immovable objects (e.g., a large medical device) serve as environmental cues that make the context-dependent recognition model part of the IE, helping doctors achieve specific goals in the hospital routine.
Keywords :
augmented reality; biomedical communication; cancer; data acquisition; health care; helmet mounted displays; human computer interaction; medical computing; medical image processing; object recognition; object tracking; speech recognition; speech-based user interfaces; ubiquitous computing; HMD; cancer screening program; context-based patient-specific external brain; context-dependent recognition model; direct data acquisition control mechanism; environmental cues; head-mounted eye tracker; head-mounted multimodal augmented reality system; hospital routine; knowledge-intensive location-based expert work; learning faces; medical application context; medical healthcare; multimodal interaction system; multiple on-body input devices; multiple on-body output devices; on-body intelligent environment; patient observation; patient recognition; recalling faces; speech-based dialogue system; voice recognition; Calibration; Context; Face recognition; Medical services; Mobile communication; Speech; Speech recognition; Augmented Reality; Input Devices and Strategies; Medical Healthcare; Realtime Interaction
Conference_Titel :
2013 9th International Conference on Intelligent Environments (IE)
Conference_Location :
Athens, Greece