Author Institution:
Dept. of Statistics, Stanford University, Stanford, CA, USA
Abstract:
Many papers studying compressed sensing consider the noisy underdetermined system of linear equations y = Ax_0 + z, with n × N measurement matrix A, n < N, and Gaussian white noise z ~ N(0, σ²I). Both y and A are known, both x_0 and z are unknown, and we seek an approximation to x_0; we let δ = n/N ∈ (0,1) denote the undersampling fraction. In the popular strict sparsity model of compressed sensing, such papers further assume that x_0 has at most a specified fraction ε of nonzeros. In this paper, we relax the assumption of strict sparsity by assuming that x_0 is close in mean p-th power to a sparse signal. We study how this relaxation affects the performance of ℓ₁-penalized ℓ₂ minimization, in which the reconstruction x^{1,λ} solves min_x ||y − Ax||_2²/2 + λ||x||_1. We study the asymptotic mean-squared error (AMSE), the large-system limit of the MSE of x^{1,λ}. Using recently developed tools based on approximate message passing (AMP), we develop expressions for the minimax AMSE M*_{ε,p}(δ, ξ, σ) (max over all approximately sparse signals, min over penalizations λ), where ξ measures the deviation from strict sparsity. There is of course a phase transition curve δ* = δ*(ε); only above this curve, δ > δ*(ε), can we have exact recovery even in the noiseless-data, strict-sparsity setting. It turns out that the minimax AMSE can be characterized succinctly by a coefficient sens*_p(ε, δ), which we refer to as the sparsity-relaxation sensitivity. We give explicit expressions for sens*_p(ε, δ), compute them, and interpret them. Our approach yields precise formulas in place of loose order bounds based on restricted isometry property and instance optimality results. Our formulas reveal that the sensitivity is finite everywhere exact recovery is possible under strict sparsity, and that the sensitivity to added random noise in the measurements y is smaller than the sensitivity to adding a comparable amount of noise to the estimand x_0. Our methods can also treat the mean q-th power loss.
The methods themselves are based on minimax decision theory and seem of independent interest.
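As a concrete illustration of the reconstruction the abstract defines, min_x ||y − Ax||_2²/2 + λ||x||_1 can be solved by iterative soft-thresholding (ISTA). The sketch below is a minimal, hedged example of that optimization problem, not the AMP algorithm the paper actually analyzes; the function names and step-size choice are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(y, A, lam, n_iter=2000):
    """Approximately solve min_x ||y - A x||_2^2 / 2 + lam * ||x||_1 by ISTA.

    A simple proximal-gradient sketch (not the paper's AMP-based analysis).
    Step size 1/L with L = ||A||_2^2, the Lipschitz constant of the
    gradient x -> A^T (A x - y).
    """
    L = np.linalg.norm(A, 2) ** 2        # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)         # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage: n = 50, N = 100 gives undersampling fraction delta = 0.5,
# with a strictly sparse x_0 (5 nonzeros, so epsilon = 0.05).
rng = np.random.default_rng(0)
n, N = 50, 100
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[:5] = 1.0
y = A @ x0                               # noiseless measurements
x_hat = ista_lasso(y, A, lam=0.01)
rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
```

With this (δ, ε) well above the phase transition curve and noiseless data, the relative error should be small, modulo the bias introduced by the penalty λ; the approximately sparse regime the paper studies would replace x0 with a vector close in mean p-th power to a sparse one.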
Keywords:
compressed sensing; mean square error methods; approximate message passing; asymptotic mean-squared error; compressed sensing performance; instance optimality results; linear equations; loose order bounds; minimax AMSE; minimax decision theory; noiseless-data strict-sparsity setting; noisy underdetermined system; phase transition curve; power loss; random noise; restricted isometry property; sparse signal; sparsity-relaxation sensitivity; strict sparsity model; undersampling fraction; approximation methods; equations; minimization; noise; noise measurement; sensitivity