DocumentCode :
1763548
Title :
A Scalable Projective Scaling Algorithm for l_{p} Loss With Convex Penalizations
Author :
Hongbo Zhou ; Qiang Cheng
Author_Institution :
Dept. of Comput. Sci., Southern Illinois Univ. Carbondale, Carbondale, IL, USA
Volume :
26
Issue :
2
fYear :
2015
fDate :
Feb. 2015
Firstpage :
265
Lastpage :
276
Abstract :
This paper presents an accurate, efficient, and scalable algorithm for minimizing a special family of convex functions that have an l_p loss function as an additive component. For this problem, well-known learning algorithms often have well-established results on accuracy and efficiency, but explicit linear scalability with respect to the problem size is rarely reported. The proposed approach starts by developing a second-order learning procedure with iterative descent for general convex penalization functions, and then builds efficient algorithms for a restricted family of functions that satisfy Karmarkar's projective scaling condition. Under this condition, a lightweight, scalable message passing algorithm (MPA) is further developed by constructing a series of simpler equivalent problems. The proposed MPA is intrinsically scalable because it involves only matrix-vector multiplications and avoids matrix inversion operations. The MPA is proven to be globally convergent for convex formulations; for nonconvex situations, it converges to a stationary point. The accuracy, efficiency, scalability, and applicability of the proposed method are verified through extensive experiments on sparse signal recovery, face image classification, and overcomplete dictionary learning problems.
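Illustrative Sketch :
The abstract notes that the MPA is scalable because each iteration involves only matrix-vector multiplications and no matrix inversions. The following minimal Python sketch illustrates that matrix-vector-only structure on the model problem min_x (1/p)*||Ax - b||_p^p + lam*||x||_1 using a standard proximal-gradient iteration; it is not the authors' MPA, and the function name lp_ista and the parameters lam, step, and n_iter are assumptions made for this illustration.

    # Hypothetical sketch (not the paper's MPA): a first-order solver whose
    # per-iteration cost is two matrix-vector products, with no matrix inversion.
    import numpy as np

    def lp_ista(A, b, p=1.5, lam=0.1, step=None, n_iter=500):
        """Proximal-gradient sketch for min_x (1/p)*||Ax-b||_p^p + lam*||x||_1."""
        x = np.zeros(A.shape[1])
        if step is None:
            # Conservative step size from the spectral norm of A; a power
            # iteration would keep even this setup matrix-vector only.
            step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-12)
        for _ in range(n_iter):
            r = A @ x - b                                  # matrix-vector product
            # Gradient of (1/p)*||r||_p^p is |r|^(p-1)*sign(r), mapped back by A^T.
            g = A.T @ (np.sign(r) * np.abs(r) ** (p - 1))  # matrix-vector product
            z = x - step * g
            # Soft thresholding: proximal operator of the l_1 penalty.
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        return x

    # Usage on a tiny synthetic sparse-recovery instance.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 200))
    x_true = np.zeros(200)
    x_true[[3, 50, 120]] = [1.0, -2.0, 1.5]
    b = A @ x_true + 0.01 * rng.standard_normal(60)
    x_hat = lp_ista(A, b, p=1.5, lam=0.05)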
Keywords :
compressed sensing; convex programming; face recognition; image classification; learning (artificial intelligence); matrix multiplication; message passing; MPA; convex penalization function; dictionary learning problem; face image classification; learning algorithm; l_p loss; matrix-vector multiplication; scalable message passing algorithm; sparse signal recovery; Approximation methods; Convergence; Convex functions; Optimization; Scalability; Solids; Vectors; convex function; Karmarkar's projective scaling condition; l_p loss function; message passing algorithm (MPA); minimization-majorization (MM); nonconvex; scalability
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks and Learning Systems
Publisher :
IEEE
ISSN :
2162-237X
Type :
jour
DOI :
10.1109/TNNLS.2014.2314129
Filename :
6808493