DocumentCode :
1551397
Title :
An overview of statistical learning theory
Author :
Vapnik, Vladimir N.
Author_Institution :
AT&T Labs-Res., Red Bank, NJ, USA
Volume :
10
Issue :
5
fYear :
1999
fDate :
9/1/1999
Firstpage :
988
Lastpage :
999
Abstract :
Statistical learning theory was introduced in the late 1960s. Until the 1990s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the mid-1990s, new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory, including both its theoretical and algorithmic aspects. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization that are more general than those discussed in classical statistical paradigms, and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems.
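For context, a commonly cited form of the generalization condition the abstract refers to is the VC bound relating expected risk to empirical risk. The version below (for bounded loss, with VC dimension h, sample size \ell, and confidence level 1 - \eta) is a sketch taken from standard statements of the theory, not a quotation from the article itself:

R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha) \;+\; \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}}{\ell}}

Here R(\alpha) is the expected risk of the function indexed by \alpha and R_{\mathrm{emp}}(\alpha) is its empirical risk on the training sample; the bound holds with probability at least 1 - \eta.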
Keywords :
estimation theory; generalisation (artificial intelligence); learning (artificial intelligence); statistical analysis; function estimation; generalization conditions; multidimensional function estimation; statistical learning theory; support vector machines; Algorithm design and analysis; Loss measurement; Machine learning; Multidimensional systems; Pattern recognition; Probability distribution; Risk management; Statistical learning; Support vector machines;
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.788640
Filename :
788640