Our objective is to train SVM-based Localized Multiple Kernel Learning with an arbitrary $\ell_p$-norm constraint, using alternating optimization between a standard SVM solver applied to the localized combination of base kernels and the update of the associated sample-specific kernel weights. Unfortunately, the latter is a difficult $\ell_p$-norm constrained quadratic optimization problem. In this letter, by approximating the $\ell_p$-norm with its Taylor expansion, the problem of updating the localized kernel weights is reformulated as a non-convex quadratically constrained quadratic program, which is then solved via its convex Semi-Definite Programming relaxation. Experiments on ten benchmark machine learning datasets demonstrate the advantages of our approach.
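For concreteness, a minimal sketch of the localized kernel combination and the sample-specific $\ell_p$-norm constraint is given below, assuming the standard LMKL gating form with weights $\eta_m(x) \ge 0$ for base kernels $k_m$; the exact parameterization used in the letter may differ.
% Sketch (assumed form): localized combination of M base kernels with
% sample-specific weights, subject to a per-sample \ell_p-norm constraint, p >= 1.
\begin{equation*}
  k_{\eta}(x_i, x_j) \;=\; \sum_{m=1}^{M} \eta_m(x_i)\,\eta_m(x_j)\,k_m(x_i, x_j),
  \qquad
  \Big( \sum_{m=1}^{M} \eta_m(x_i)^{p} \Big)^{1/p} \le 1
  \quad \text{for each sample } x_i .
\end{equation*}
Under this form, the SVM step keeps $\eta$ fixed and trains on $k_{\eta}$, while the weight-update step keeps the SVM dual variables fixed and optimizes $\eta$ under the $\ell_p$-norm constraint, which is the step reformulated and relaxed above.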