  • DocumentCode
    11400
  • Title
    Gaussian Kernel Width Optimization for Sparse Bayesian Learning

  • Author
    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid
  • Author_Institution
    Department of Electrical Engineering, Amirkabir University of Technology, Tehran, Iran
  • Volume
    26
  • Issue
    4
  • fYear
    2015
  • fDate
    April 2015
  • Firstpage
    709
  • Lastpage
    719
  • Abstract
    Sparse kernel methods are widely used in regression and classification applications. Their performance and sparsity depend on an appropriate choice of kernel functions and kernel parameters, which are typically selected by cross-validation. In this paper, a learning method that extends the relevance vector machine (RVM) is presented. The proposed method finds optimal values of the kernel parameters during the training procedure, using an expectation-maximization (EM) approach to update the kernel parameters together with the other model parameters; consequently, its convergence speed and computational complexity are the same as those of the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed under a constraint on these parameters. The proposed method is compared with the standard RVM and other competing methods. Experimental results on commonly used synthetic data, as well as on benchmark data sets, demonstrate that the proposed method reduces the dependence of performance on the initial choice of the kernel parameters.
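    For context, the sketch below shows the standard RVM regression baseline that the paper extends: the Gaussian kernel width is fixed here and would normally be tuned by cross-validation, whereas the paper's method learns it jointly via EM. The function names and the MacKay-style re-estimation schedule are illustrative, not the authors' code.

    ```python
    import numpy as np

    def gaussian_kernel(X, Z, width):
        # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    def rvm_regression(X, t, width, n_iter=100, alpha_cap=1e6):
        """Standard RVM regression with a fixed Gaussian kernel width
        (the baseline that the proposed method extends by also learning width)."""
        N = X.shape[0]
        Phi = gaussian_kernel(X, X, width)       # design matrix of kernel columns
        alpha = np.ones(N)                       # per-weight precision hyperparameters
        beta = 1.0 / np.var(t)                   # noise precision
        for _ in range(n_iter):
            # E-step: Gaussian posterior over weights given current hyperparameters.
            Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
            mu = beta * Sigma @ Phi.T @ t
            # M-step: re-estimate alpha and beta; large alpha prunes a basis function.
            gamma = 1.0 - alpha * np.diag(Sigma)
            alpha = np.clip(gamma / np.maximum(mu ** 2, 1e-12), 0.0, alpha_cap)
            beta = (N - gamma.sum()) / np.maximum(((t - Phi @ mu) ** 2).sum(), 1e-12)
        return mu, alpha, beta, Phi
    ```

    On the usual sinc benchmark this drives most `alpha` values toward the cap, leaving only a few relevance vectors, but the quality of the fit still hinges on the `width` passed in — the sensitivity the paper aims to remove.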
  • Keywords
    Gaussian processes; belief networks; computational complexity; convergence; expectation-maximisation algorithm; support vector machines; Gaussian kernel width optimization; RVM; classification application; convergence control; cross-validation approach; expectation-maximization approach; parameterized model; performance dependency reduction; regression application; relevance vector machine; sparse Bayesian learning; sparse kernel methods; training procedure; Convergence; Kernel; Optimization; Prediction algorithms; Standards; Training; Vectors; Adaptive kernel learning (AKL); expectation maximization (EM); kernel width optimization; regression; relevance vector machine (RVM); sparse Bayesian learning; supervised kernel methods
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Neural Networks and Learning Systems
  • Publisher
    IEEE
  • ISSN
    2162-237X
  • Type
    jour
  • DOI
    10.1109/TNNLS.2014.2321134
  • Filename
    6818403