Title :
Parameter learning with truncated message-passing
Author_Institution :
Rochester Institute of Technology
Abstract :
Training of conditional random fields often takes the form of a double-loop procedure with message-passing inference in the inner loop. This can be very expensive, since solving the inner loop to high accuracy can require many message-passing iterations. This paper seeks to reduce the expense of such training by redefining the training objective in terms of the approximate marginals obtained after message-passing is “truncated” to a fixed number of iterations. An algorithm is derived to efficiently compute the exact gradient of this objective. On a common pixel-labeling benchmark, this procedure improves training speed by an order of magnitude, and slightly improves inference accuracy when a very small number of message-passing iterations is used at test time.
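A minimal sketch of the idea described in the abstract: run sum-product message passing for a fixed number of iterations K, define the training loss on the resulting approximate marginals, and differentiate that loss exactly. Here reverse-mode automatic differentiation (JAX) stands in for the paper's hand-derived gradient algorithm; the chain-structured toy model, the marginal-based loss, and all names and sizes below are illustrative assumptions, not the paper's implementation.

    import jax
    import jax.numpy as jnp
    from jax.scipy.special import logsumexp

    N, L, K = 6, 3, 5   # chain length, number of labels, truncation depth K

    def truncated_marginals(log_unary, log_pair, K):
        # Sum-product on a chain, truncated to K parallel update sweeps.
        # log_unary: (N, L) unary log-potentials.
        # log_pair:  (L, L) shared pairwise log-potentials.
        fwd = jnp.zeros((N, L))   # messages from node i-1 into node i
        bwd = jnp.zeros((N, L))   # messages from node i+1 into node i

        def sweep(carry, _):
            fwd, bwd = carry
            # message into node i from the left: sum out x_{i-1}
            new_fwd = logsumexp(
                (log_unary + fwd)[:-1, :, None] + log_pair[None], axis=1)
            new_fwd = jnp.concatenate([jnp.zeros((1, L)), new_fwd], axis=0)
            # message into node i from the right: sum out x_{i+1}
            new_bwd = logsumexp(
                (log_unary + bwd)[1:, :, None] + log_pair.T[None], axis=1)
            new_bwd = jnp.concatenate([new_bwd, jnp.zeros((1, L))], axis=0)
            return (new_fwd, new_bwd), None

        (fwd, bwd), _ = jax.lax.scan(sweep, (fwd, bwd), None, length=K)
        # beliefs after exactly K iterations -- not at convergence
        return jax.nn.softmax(log_unary + fwd + bwd, axis=-1)

    def loss(params, labels):
        mu = truncated_marginals(params['unary'], params['pair'], K)
        # marginal-based surrogate loss: -log of the approximate marginal
        # assigned to the true label at each node
        return -jnp.sum(jnp.log(mu[jnp.arange(N), labels] + 1e-12))

    params = {'unary': jnp.zeros((N, L)), 'pair': jnp.eye(L)}
    labels = jnp.array([0, 0, 1, 1, 2, 2])
    # Exact gradient of the truncated-inference objective, usable with any
    # gradient-based optimizer; no inner loop is run to convergence.
    grads = jax.grad(loss)(params, labels)

Because the loss is defined on the K-step beliefs themselves, the gradient is exact for the truncated objective even when those beliefs are far from the message-passing fixed point, which is what allows training to use very few iterations.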
Keywords :
inference mechanisms; learning (artificial intelligence); message passing; statistical analysis; conditional random fields; double-loop procedure; message-passing inference; message-passing iterations; parameter learning; pixel labeling benchmark; truncated message-passing; Accuracy; Approximation algorithms; Computational modeling; Convergence; Image resolution; Message passing; Training
Conference_Title :
2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference_Location :
Providence, RI
Print_ISBN :
978-1-4577-0394-2
DOI :
10.1109/CVPR.2011.5995320