Title :
Parallel Multi-task Learning
Author_Institution :
Dept. of Comput. Sci. &
Abstract :
In this paper, we develop parallel algorithms for a family of regularized multi-task methods that model task relations under the regularization framework. Since these multi-task methods cannot be parallelized directly, we solve them with the FISTA algorithm, which in each iteration exploits the Lipschitz structure of the objective function to construct a surrogate function of the original problem around the solution of the previous iteration. Specifically, we derive the dual forms of the objectives under the hinge, ε-insensitive, and square losses to handle multi-task classification and regression problems, and then use the Lipschitz structure to construct surrogate functions for these dual forms. The surrogate functions constructed in the FISTA algorithm are found to be decomposable, leading to parallel designs for these multi-task methods. Experiments on several benchmark datasets show that the proposed algorithms converge as fast as SMO-style algorithms and that the parallel design speeds up the computation.
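To make the FISTA-based scheme described above concrete, the sketch below (not the authors' implementation; the function name fista_box_qp, the matrix Q, and the bound C are illustrative assumptions) shows a generic FISTA iteration on a box-constrained quadratic dual of the kind that arises from the hinge loss. The proximal step reduces to a coordinate-wise projection onto the box, i.e. it is decomposable, which is the property that permits a parallel design.

```python
# Minimal FISTA sketch for min_a 0.5*a'Qa + c'a  s.t.  0 <= a <= C
# (generic form of a hinge-loss dual; names and defaults are assumptions).
import numpy as np

def fista_box_qp(Q, c, C, max_iter=200):
    """FISTA on a box-constrained quadratic.

    The smooth part is the quadratic; its gradient is Lipschitz with constant
    L = lambda_max(Q).  The proximal step is a projection onto the box, which
    acts coordinate-wise (decomposable), so each coordinate update could be
    assigned to a different worker in a parallel implementation.
    """
    n = Q.shape[0]
    L = np.linalg.eigvalsh(Q)[-1]              # Lipschitz constant of the gradient
    a = np.zeros(n)                            # current iterate
    y = a.copy()                               # extrapolated (momentum) point
    t = 1.0                                    # momentum parameter
    for _ in range(max_iter):
        grad = Q @ y + c                       # gradient of the smooth part at y
        a_new = np.clip(y - grad / L, 0.0, C)  # separable projection onto the box
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = a_new + ((t - 1.0) / t_new) * (a_new - a)
        a, t = a_new, t_new
    return a

if __name__ == "__main__":
    # Toy usage on a small random positive-definite Q.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    Q = M @ M.T + np.eye(5)
    print(fista_box_qp(Q, c=-np.ones(5), C=1.0))
```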
Keywords :
"Covariance matrices","Algorithm design and analysis","Linear programming","Fasteners","Parallel algorithms","Kernel","Convergence"
Conference_Title :
2015 IEEE International Conference on Data Mining (ICDM)
DOI :
10.1109/ICDM.2015.130