Abstract:
The majorize-minimize (MM) optimization technique has received considerable attention in signal and image processing applications, as well as in the statistics literature. At each iteration of an MM algorithm, one constructs a tangent majorant function that majorizes the given cost function and is equal to it at the current iterate. The next iterate is obtained by minimizing this tangent majorant function, resulting in a sequence of iterates that reduces the cost function monotonically. A well-known special case of MM methods is the class of expectation-maximization (EM) algorithms. In this paper, we expand on previous analyses of MM, due to Fessler and Hero, that allowed the tangent majorants to be constructed in iteration-dependent ways; this paper also corrects an error in one of those earlier analyses. Our analysis builds upon previous work in three main ways. First, our treatment relaxes many assumptions about the structure of the cost function, the feasible set, and the tangent majorants. For example, the cost function can be nonconvex, and the feasible set can be any convex set. Second, we propose convergence conditions, based on upper curvature bounds, that can be easier to verify than more standard continuity conditions; these conditions also allow considerable design freedom in the iteration-dependent behavior of the algorithm. Finally, we give an original characterization of the local region of convergence of MM algorithms based on connected (e.g., convex) tangent majorants. For such algorithms, cost function minimizers locally attract the iterates over larger neighborhoods than is typically guaranteed with other methods. This expanded treatment widens the scope of MM algorithm designs that can be considered for signal and image processing applications, allows us to verify the convergent behavior of previously published algorithms, and gives a fuller overall understanding of how these algorithms behave.
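To make the MM mechanism concrete (this illustration is ours, not drawn from the paper), consider the simplest tangent majorant built from an upper curvature bound: if a constant c upper-bounds the curvature of the cost function Psi, then phi(x; x_n) = Psi(x_n) + Psi'(x_n)(x - x_n) + (c/2)(x - x_n)^2 majorizes Psi, equals it at the current iterate x_n, and is minimized in closed form by the update x_{n+1} = x_n - Psi'(x_n)/c. A minimal Python sketch under these assumptions, with hypothetical names `mm_minimize`, `grad`, and `curv_bound`:

```python
import numpy as np

def mm_minimize(cost, grad, curv_bound, x0, n_iters=50):
    """Majorize-minimize loop using a quadratic tangent majorant.

    Assumes curv_bound upper-bounds the curvature of `cost`, so that
    phi(x; xn) = cost(xn) + grad(xn)*(x - xn) + 0.5*curv_bound*(x - xn)**2
    majorizes `cost` and touches it at xn.  Minimizing phi exactly gives
    the update below, so the cost decreases monotonically.
    """
    x = x0
    for _ in range(n_iters):
        x_new = x - grad(x) / curv_bound  # minimizer of the tangent majorant
        assert cost(x_new) <= cost(x) + 1e-12  # MM monotonicity guarantee
        x = x_new
    return x

# Example: Psi(x) = log(cosh(x)) has second derivative 1/cosh(x)^2 <= 1,
# so curv_bound = 1 yields a valid tangent majorant at every iterate.
cost = lambda x: np.log(np.cosh(x))
grad = np.tanh
print(mm_minimize(cost, grad, curv_bound=1.0, x0=3.0))  # approaches the minimizer x = 0
```

This sketch is the fixed-majorant special case; the iteration-dependent algorithms analyzed in the paper allow the majorant (here, curv_bound) to change from one iteration to the next.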
Keywords:
majorize-minimize (MM) optimization; optimization transfer; expectation-maximization (EM) algorithm; SAGE; iteration-dependent algorithms; tangent majorant function; upper curvature bounds; convergence of numerical methods; convergence conditions; cost function; minimization; algorithm design and analysis; maximum likelihood estimation; Jacobian matrices; likelihood functions; signal processing; image processing; positron emission tomography