DocumentCode
1603633
Title
Learning spatial event models from multiple-camera perspectives
Author
Coen, Michael H.; Wilson, Kevin W.
Author_Institution
Artificial Intelligence Lab., MIT, Cambridge, MA, USA
Volume
1
fYear
1999
fDate
1999
Firstpage
149
Abstract
Intelligent, interactive environments promise to drastically change our everyday lives by connecting computation to the ordinary, human-level events happening in the real world. This paper describes a new model for tracking people in a room through a multi-camera vision system that learns to combine event predictions from multiple video streams. The system is intended to locate and track people in the room, determine their postures, and obtain images of their faces and upper bodies suitable for use during teleconferencing. The paper presents the design and architecture of the vision system and its use in Hal, the most recently constructed interactive space in the authors' Intelligent Room project.
Keywords
cameras; computer vision; image sensors; interactive systems; learning (artificial intelligence); teleconferencing; Intelligent Room project; intelligent interactive environments; interactive space; learning spatial event models; multi-camera vision system; multiple video streams; multiple-camera perspectives; people tracking; Artificial intelligence; Cameras; Computational intelligence; Computer interfaces; Computer vision; Face detection; Human computer interaction; Laboratories; Machine vision; Streaming media
fLanguage
English
Publisher
ieee
Conference_Titel
Industrial Electronics Society, 1999. IECON '99 Proceedings. The 25th Annual Conference of the IEEE
Conference_Location
San Jose, CA
Print_ISBN
0-7803-5735-3
Type
conf
DOI
10.1109/IECON.1999.822188
Filename
822188