Title :
Incremental Reformulated Automatic Relevance Determination
Author :
Shutin, Dmitriy ; Kulkarni, Sanjeev R. ; Poor, H. Vincent
Author_Institution :
Dept. of Electr. Eng., Princeton Univ., Princeton, NJ, USA
Abstract :
In this work, the relationship between the incremental version of sparse Bayesian learning (SBL) with automatic relevance determination (ARD), known as the fast marginal likelihood maximization (FMLM) algorithm, and a recently proposed reformulated ARD scheme is established. The FMLM algorithm is an incremental approach to SBL with ARD in which the objective function, the marginal likelihood, is optimized with respect to the parameters of a single component while the other parameters are held fixed; the corresponding maximizer is computed in closed form, which enables a very efficient SBL realization. Wipf and Nagarajan have recently proposed a reformulated ARD (R-ARD) approach that optimizes the marginal likelihood using auxiliary upper-bounding functions; the resulting algorithm is shown to correspond to a series of reweighted l1-constrained convex optimization problems. This correspondence establishes and analyzes the relationship between the FMLM and R-ARD schemes. Specifically, it is demonstrated that the FMLM algorithm realizes an incremental approach to the optimization of the R-ARD objective function. This relationship permits the derivation of R-ARD pruning conditions, similar to those used in the FMLM scheme, that analytically detect components to be removed from the model, thus regulating the estimated signal sparsity and accelerating the convergence of the algorithm.
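As a rough illustration of the single-component update described in the abstract, the sketch below implements the standard FMLM coordinate step from Tipping and Faul's fast marginal likelihood maximization: for one component, "sparsity" and "quality" factors are computed against the marginal covariance with that component excluded, and the closed-form maximizer of the marginal likelihood yields either a finite precision or a pruning decision. The function name and NumPy interface are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def fmlm_alpha_update(Phi, t, sigma2, alpha, i):
    """One FMLM coordinate update for the hyperparameter alpha_i.

    Components with alpha_j = np.inf are treated as pruned from the model.
    Returns the closed-form maximizer of the marginal likelihood with
    respect to alpha_i, or np.inf if the pruning condition fires.
    (Illustrative sketch of the classic Tipping-Faul update.)
    """
    N = Phi.shape[0]
    # Marginal covariance with component i excluded:
    # C_{-i} = sigma^2 I + sum_{j != i} alpha_j^{-1} phi_j phi_j^T
    C = sigma2 * np.eye(N)
    for j in range(Phi.shape[1]):
        if j != i and np.isfinite(alpha[j]):
            C += np.outer(Phi[:, j], Phi[:, j]) / alpha[j]
    Cinv_phi = np.linalg.solve(C, Phi[:, i])
    s = Phi[:, i] @ Cinv_phi   # "sparsity" factor s_i
    q = t @ Cinv_phi           # "quality" factor q_i
    if q**2 > s:
        return s**2 / (q**2 - s)   # component retained: closed-form alpha_i
    return np.inf                  # pruning condition: remove component i
```

A toy usage: with orthonormal columns and an empty model (all alphas infinite), a component aligned with the target receives a finite precision, while one orthogonal to it is pruned analytically, which is exactly the mechanism the paper transfers to the R-ARD setting.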
Keywords :
Bayes methods; convergence; convex programming; learning (artificial intelligence); maximum likelihood estimation; signal processing; FMLM algorithm; R-ARD approach; R-ARD objective function; R-ARD pruning conditions; SBL realization; algorithm convergence; auxiliary upper bounding functions; estimated signal sparsity; fast marginal likelihood maximization algorithm; incremental reformulated automatic relevance determination; incremental version; reformulated ARD approach; reformulated ARD scheme; reweighted l1-constrained convex optimization problems; sparse Bayesian learning; Acceleration; Convergence; Covariance matrix; Optimization; Signal to noise ratio; Vectors; Automatic relevance determination; fast marginal likelihood maximization; sparse Bayesian learning;
Journal_Title :
IEEE Transactions on Signal Processing
DOI :
10.1109/TSP.2012.2200478