DocumentCode :
2912863
Title :
Parameter learning with truncated message-passing
Author :
Domke, Justin
Author_Institution :
Rochester Institute of Technology
fYear :
2011
fDate :
20-25 June 2011
Firstpage :
2937
Lastpage :
2943
Abstract :
Training of conditional random fields often takes the form of a double-loop procedure with message-passing inference in the inner loop. This can be very expensive, as the need to solve the inner loop to high accuracy can require many message-passing iterations. This paper seeks to reduce the expense of such training, by redefining the training objective in terms of the approximate marginals obtained after message-passing is “truncated” to a fixed number of iterations. An algorithm is derived to efficiently compute the exact gradient of this objective. On a common pixel labeling benchmark, this procedure improves training speeds by an order of magnitude, and slightly improves inference accuracy if a very small number of message-passing iterations are used at test time.
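Illustration :
The abstract's central idea, computing the exact gradient of a training objective defined on marginals produced by message passing truncated to a fixed number of iterations, can be sketched with automatic differentiation. The following is a minimal illustrative sketch, not the paper's implementation: it assumes a small chain-structured CRF with a shared pairwise potential, runs parallel sum-product belief propagation for exactly K sweeps, defines a surrogate marginal-based loss, and lets JAX autodiff stand in for the hand-derived gradient algorithm the paper describes. All names and model choices below are assumptions made for illustration.

# Minimal sketch, assuming a chain CRF with one shared (L, L) pairwise
# potential; none of this code is from the paper. Loopy BP is truncated
# to K iterations, the loss is defined on the resulting approximate
# marginals, and jax.grad computes the exact gradient of that truncated
# objective (the paper derives this gradient analytically instead).
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

N, L, K = 5, 3, 4  # chain length, label count, message-passing iterations

def truncated_bp_loss(log_pair, unary, labels):
    # log_pair: (L, L) log pairwise potential shared by every chain edge.
    # fwd[i]: log message from node i into node i+1; bwd[i]: from i+1 into i.
    fwd = jnp.zeros((N - 1, L))
    bwd = jnp.zeros((N - 1, L))
    for _ in range(K):  # truncation: exactly K sweeps, no convergence test
        new_fwd, new_bwd = [], []
        for i in range(N - 1):
            # Belief at node i excluding the message arriving from node i+1.
            b = unary[i] + (fwd[i - 1] if i > 0 else 0.0)
            new_fwd.append(logsumexp(b[:, None] + log_pair, axis=0))
            # Belief at node i+1 excluding the message arriving from node i.
            b2 = unary[i + 1] + (bwd[i + 1] if i + 1 < N - 1 else 0.0)
            new_bwd.append(logsumexp(log_pair + b2[None, :], axis=1))
        fwd, bwd = jnp.stack(new_fwd), jnp.stack(new_bwd)
    # Approximate marginals obtained from the truncated messages.
    logits = unary.at[1:].add(fwd).at[:-1].add(bwd)
    log_marg = jax.nn.log_softmax(logits, axis=1)
    # Surrogate objective: negative log approximate marginal at each label.
    return -jnp.sum(log_marg[jnp.arange(N), labels])

# Exact gradient of the truncated-inference objective w.r.t. the potentials.
grad_fn = jax.grad(truncated_bp_loss)
unary = jax.random.normal(jax.random.PRNGKey(0), (N, L))
labels = jnp.array([0, 1, 2, 1, 0])
print(grad_fn(jnp.zeros((L, L)), unary, labels).shape)  # (3, 3)

Because the loop runs a fixed K iterations rather than to convergence, the objective is an exact function of the truncated marginals, so its gradient is exact too; this mirrors the paper's motivation for avoiding a high-accuracy inner loop.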
Keywords :
inference mechanisms; learning (artificial intelligence); message passing; statistical analysis; conditional random fields; double-loop procedure; message-passing inference; message-passing iterations; parameter learning; pixel labeling benchmark; truncated message-passing; Accuracy; Approximation algorithms; Computational modeling; Convergence; Image resolution; Message passing; Training
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference_Location :
Providence, RI
ISSN :
1063-6919
Print_ISBN :
978-1-4577-0394-2
Type :
conf
DOI :
10.1109/CVPR.2011.5995320
Filename :
5995320