Title :
Weakly supervised pain localization using multiple instance learning
Author :
Sikka, Karan ; Dhall, Abhinav ; Bartlett, Marian
Author_Institution :
Machine Perception Lab., Univ. of California, San Diego, La Jolla, CA, USA
Abstract :
Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs. no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression in any given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) in which each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence-level ground truth. These segments are generated via multiple clusterings of a sequence or by running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such a representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Experiments on the UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of our approach, which achieves promising results on the problem of pain detection in videos.
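To make the pipeline described in the abstract more concrete, the following is a minimal sketch, not the authors' implementation: it slices a video's per-frame features into multi-scale temporal segments via a scanning window, pools each segment into a histogram standing in for the BoW segment representation, and scores the resulting bag with a max-over-instances rule typical of MIL. All function names, window scales, feature dimensions, and the linear scorer are illustrative assumptions.

```python
# Minimal MS-MIL-style sketch (illustrative, not the authors' code):
# multi-scale temporal segments -> pooled histograms (instances) -> MIL bag score.
import numpy as np

def segment_instances(frame_features, scales=(8, 16, 32), stride=4):
    """Slice a (num_frames, dim) feature matrix into overlapping temporal
    segments at several window lengths; each segment becomes one instance."""
    instances = []
    num_frames = frame_features.shape[0]
    for win in scales:
        for start in range(0, max(num_frames - win + 1, 1), stride):
            seg = frame_features[start:start + win]
            # Pool the segment's frames into one L1-normalized histogram,
            # a stand-in for the Bag-of-Words segment representation.
            hist = seg.sum(axis=0)
            hist /= (np.linalg.norm(hist, ord=1) + 1e-8)
            instances.append(hist)
    return np.vstack(instances)          # (num_instances, dim) bag for the video

def bag_score(bag, w, b=0.0):
    """MIL-style decision: a bag (video) is positive if its best-scoring
    instance (segment) is positive, i.e. score = max_i (w . x_i + b)."""
    return np.max(bag @ w + b)

# Toy usage: one video as random per-frame features and a random linear model.
rng = np.random.default_rng(0)
frames = np.abs(rng.standard_normal((120, 50)))   # 120 frames, 50-dim codebook
w = rng.standard_normal(50)
bag = segment_instances(frames)
print("predicted pain" if bag_score(bag, w) > 0 else "predicted no pain")
```

Under this max-over-instances rule, the arg-max segment also indicates which frames carry the target expression, which is how sequence-level labels can yield the frame-level localization claimed in the abstract.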
Keywords :
face recognition; image representation; image segmentation; learning (artificial intelligence); pattern clustering; video signal processing; AFER research; BoW representation; MS-MIL; UNBC-McMaster shoulder pain dataset; automatic facial expression recognition research; automatic pain recognition; bag of words representation; concept frames; concept segments; multiple clustering; multiple instance learning; multiple segments; multiscale temporal scanning window; no-pain systems; pain expression event; sequence level ground-truth; target expression; temporal duration; temporal dynamics; uncertain temporal location; videos; vital clinical application; weakly labeled data; weakly supervised pain localization; Feature extraction; Hidden Markov models; Pain; Support vector machines; Training; Training data; Videos;
Conference_Title :
2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)
Conference_Location :
Shanghai, China
Print_ISBN :
978-1-4673-5545-2
Electronic_ISBN :
978-1-4673-5544-5
DOI :
10.1109/FG.2013.6553762