DocumentCode :
2830223
Title :
Learning task structure from video examples for workflow tracking and authoring
Author :
Petersen, Nils ; Stricker, Didier
Author_Institution :
DFKI GmbH, Germany
fYear :
2012
fDate :
5-8 Nov. 2012
Firstpage :
237
Lastpage :
246
Abstract :
We present a simple, robust, real-time-capable framework for segmenting video sequences and live streams of manual workflows into their constituent tasks. Using classifiers trained on these segments, we can follow a user performing the workflow in real time and learn task variants from additional video examples. Our method requires neither object detection nor high-level features. Instead, we propose a novel measure derived from image distance that evaluates image properties jointly, without prior segmentation. The method copes with repetitive and free-hand activities, and its results are in many cases comparable or equal to manual task segmentation. One important application is the automatic creation of step-by-step task documentation from a video demonstration. We explain in detail the entire process of automatically creating a fully functional augmented reality manual and present results.
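Illustration (not from the paper): as a rough sketch of the kind of pipeline the abstract describes, the following Python code cuts a video into candidate task segments wherever a plain frame-to-frame L2 image distance spikes. The distance measure, the threshold, and the minimum segment length are placeholder assumptions; the authors' actual measure evaluates image properties jointly and is reported to be considerably more robust than simple frame differencing.

    # Hypothetical sketch of image-distance-based task segmentation.
    # The L2 distance and fixed threshold below are assumptions for
    # illustration, not the measure proposed in the paper.
    import cv2
    import numpy as np

    def segment_tasks(video_path, threshold=12.0, min_len=30):
        """Return (start, end) frame index pairs of candidate task segments."""
        cap = cv2.VideoCapture(video_path)
        prev, dists = None, []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Downscale and convert to grayscale to stabilise the distance.
            g = cv2.cvtColor(cv2.resize(frame, (64, 48)), cv2.COLOR_BGR2GRAY)
            g = g.astype(np.float32)
            if prev is not None:
                # Root-mean-square intensity difference to the previous frame.
                dists.append(np.sqrt(np.mean((g - prev) ** 2)))
            prev = g
        cap.release()

        # Cut wherever the distance spikes above the threshold, then
        # discard segments shorter than min_len frames.
        cuts = [0] + [i + 1 for i, d in enumerate(dists) if d > threshold]
        cuts.append(len(dists) + 1)
        return [(a, b) for a, b in zip(cuts, cuts[1:]) if b - a >= min_len]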
Keywords :
augmented reality; image classification; image segmentation; image sequences; video signal processing; augmented reality; authoring; classifier; image distance; image properties; learning task structure; live-streams; manual task segmentation; task documentation; task variant; video demonstration; video example; video segmentation; video sequence; workflow tracking; Augmented reality; Current measurement; Image segmentation; Manuals; Motion segmentation; Robustness; Training
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Mixed and Augmented Reality (ISMAR), 2012 IEEE International Symposium on
Conference_Location :
Atlanta, GA
Print_ISBN :
978-1-4673-4660-3
Electronic_ISBN :
978-1-4673-4661-0
Type :
conf
DOI :
10.1109/ISMAR.2012.6402562
Filename :
6402562