DocumentCode :
397777
Title :
Visually steerable sound beam forming method possible to track target person by real-time visual face tracking and speaker array
Author :
Shinoda, Kensuke ; Mizoguchi, Hiroshi ; Kagami, Satoshi ; Nagashima, Koichi
Author_Institution :
Dept. of Mech. Eng., Tokyo Univ. of Sci., Noda, Japan
Volume :
3
fYear :
2003
fDate :
5-8 Oct. 2003
Firstpage :
2199
Abstract :
This paper presents a method of visually steerable sound beam forming. The method combines face detection and tracking by motion image processing with sound beam forming by a speaker array. The direction toward a target person is obtained from the face tracking in real time. By continuously updating the sound beam direction with the result of the face detection and tracking, the system can keep transmitting sound toward the target person selectively, even if he or she moves around. Experimental results demonstrate the feasibility and effectiveness of the method.
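As a rough illustration of the idea described in the abstract, the sketch below shows how a tracked face position could drive the steering delays of a linear loudspeaker array via delay-and-sum beamforming. This is not the authors' implementation; the array geometry, sample rate, camera field of view, and the face-to-angle mapping are all assumptions made for the example.

```python
# Minimal sketch, assuming a linear loudspeaker array co-axial with the camera.
# All constants below are illustrative assumptions, not values from the paper.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s (assumed room conditions)
SAMPLE_RATE = 16000      # Hz (assumed)
NUM_SPEAKERS = 8         # array size (assumed)
SPACING = 0.05           # inter-speaker spacing in metres (assumed)

def steering_delays(angle_rad: float) -> np.ndarray:
    """Per-speaker delays (seconds) that align the emitted wavefronts
    toward angle_rad, measured from the array broadside."""
    positions = (np.arange(NUM_SPEAKERS) - (NUM_SPEAKERS - 1) / 2) * SPACING
    delays = positions * np.sin(angle_rad) / SPEED_OF_SOUND
    return delays - delays.min()  # shift so all delays are non-negative

def steer_signal(mono: np.ndarray, angle_rad: float) -> np.ndarray:
    """Produce one delayed copy of `mono` per speaker so the beam points
    toward angle_rad. Delays are rounded to whole samples here; a real
    system would use fractional-delay filtering."""
    sample_delays = np.round(steering_delays(angle_rad) * SAMPLE_RATE).astype(int)
    out = np.zeros((NUM_SPEAKERS, len(mono) + sample_delays.max()))
    for ch, d in enumerate(sample_delays):
        out[ch, d:d + len(mono)] = mono
    return out

def face_to_angle(face_x: float, frame_width: int, fov_rad: float) -> float:
    """Map the tracked face's horizontal pixel position to a steering angle,
    assuming the camera optical axis matches the array broadside."""
    return (face_x / frame_width - 0.5) * fov_rad

if __name__ == "__main__":
    tone = np.sin(2 * np.pi * 1000 * np.arange(SAMPLE_RATE) / SAMPLE_RATE)
    angle = face_to_angle(face_x=480, frame_width=640, fov_rad=np.radians(60))
    channels = steer_signal(tone, angle)
    print(f"Steering {channels.shape[0]} channels toward {np.degrees(angle):.1f} deg")
```

In a running system, `face_to_angle` would be re-evaluated on every video frame from the face tracker and the delays updated accordingly, which is the "continuous update" loop the abstract describes.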
Keywords :
face recognition; image motion analysis; image processing; speaker recognition; face detection; face tracking; motion image processing; real-time visual face tracking; sound beam direction; speaker array; transmitting sounds; visually steerable sound beam forming; Acoustic beams; Acoustic measurements; Art; Face detection; Humans; Image processing; Loudspeakers; Mechanical engineering; Position measurement; Target tracking;
fLanguage :
English
Publisher :
ieee
Conference_Title :
2003 IEEE International Conference on Systems, Man and Cybernetics
ISSN :
1062-922X
Print_ISBN :
0-7803-7952-7
Type :
conf
DOI :
10.1109/ICSMC.2003.1244210
Filename :
1244210