• DocumentCode
    20982
  • Title
    Universal Approximation with Convex Optimization: Gimmick or Reality? [Discussion Forum]
  • Author
    Principe, Jose C.; Chen, Badong
  • Author_Institution
    Dept. of Electr. & Comput. Eng., Univ. of Florida, Gainesville, FL, USA
  • Volume
    10
  • Issue
    2
  • fYear
    2015
  • fDate
    May 2015
  • Firstpage
    68
  • Lastpage
    77
  • Abstract
    This paper surveys, in a tutorial fashion, the recent history of universal learning machines, starting with the multilayer perceptron. The big push in recent years has been on the design of universal learning machines using optimization methods that are linear in the parameters, such as the Echo State Network, the Extreme Learning Machine, and the Kernel Adaptive Filter. We call this class of learning machines convex universal learning machines, or CULMs. The purpose of the paper is to compare the methods behind these CULMs, highlighting their features using concepts from vector spaces (i.e., basis functions and projections) that are readily understood by the computational intelligence community. We illustrate how two of the CULMs behave in a simple example, and we conclude that it is indeed practical to create universal mappers with convex adaptation, which is an improvement over backpropagation.
  • Keywords
    convex programming; learning (artificial intelligence); multilayer perceptrons; CULM; basis functions; computational intelligence community; convex optimization; convex universal learning machines; multilayer perceptron; projections; universal approximation; universal mappers; vector spaces; Adaptive filters; Kernel adaptive filters; Learning systems; Multilayer perceptrons; Optimization methods; Tutorials
  • fLanguage
    English
  • Journal_Title
    IEEE Computational Intelligence Magazine
  • Publisher
    IEEE
  • ISSN
    1556-603X
  • Type
    jour
  • DOI
    10.1109/MCI.2015.2405352
  • Filename
    7083777