DocumentCode :
1842911
Title :
An introduction to information theoretic learning
Author :
Principe, Jose C. ; Xu, Dongxin
Author_Institution :
Lab. of Comput. NeuroEng., Florida Univ., Gainesville, FL, USA
Volume :
3
fYear :
1999
fDate :
1999
Firstpage :
1783
Abstract :
Learning from examples has traditionally been done with correlation or with the mean square error (MSE) criterion, in spite of the fact that learning is intrinsically related to the extraction of information from examples. The problem is that the information entropy introduced by Shannon (1948), while resting on a sound theoretical foundation, is not easy to implement in a learning-from-examples scenario. In this paper Rényi´s entropy definition (1976) is used and integrated with a nonparametric estimator of the probability density function (the Parzen window). Experimental results on blind source separation confirm the theory. Although the work is preliminary, the “information potential” method is rather general and will have many applications.
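The pairing described in the abstract can be sketched concretely: with Gaussian Parzen windows, the estimate of Rényi's quadratic entropy reduces to a double sum over pairwise sample interactions, which is the “information potential”. The code below is a minimal illustration under assumptions not fixed by the record (1-D data, a Gaussian kernel of width sigma, and the names information_potential and renyi_quadratic_entropy are illustrative choices, not the paper's implementation).

```python
import numpy as np

def information_potential(samples, sigma=1.0):
    """Estimate V(X) = integral of p(x)^2 dx from a 1-D sample
    via Gaussian Parzen windows. The integral of the product of two
    Gaussian kernels of variance sigma^2 is a Gaussian of variance
    2*sigma^2 evaluated at the difference of their centres, so the
    estimate is an exact double sum over all sample pairs.
    """
    x = np.asarray(samples, dtype=float)
    diffs = x[:, None] - x[None, :]           # all pairwise differences
    var2 = 2.0 * sigma ** 2                   # variance of convolved kernel
    g = np.exp(-diffs ** 2 / (2.0 * var2)) / np.sqrt(2.0 * np.pi * var2)
    return g.mean()                           # (1/N^2) * sum over pairs

def renyi_quadratic_entropy(samples, sigma=1.0):
    """Rényi's quadratic entropy estimate: H2(X) = -log V(X)."""
    return -np.log(information_potential(samples, sigma))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    narrow = rng.normal(0.0, 0.5, size=500)   # low-entropy sample
    wide = rng.normal(0.0, 2.0, size=500)     # high-entropy sample
    print(renyi_quadratic_entropy(narrow))    # smaller value
    print(renyi_quadratic_entropy(wide))      # larger value
```

The Gaussian convolution identity makes the double sum exact for the quadratic (order-2) case, which is why Rényi's entropy, unlike Shannon's, combines naturally with Parzen estimation and avoids numerical integration.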
Keywords :
entropy; learning by example; neural nets; probability; Parzen window; blind source separation; example-based learning; information entropy; information extraction; information potential method; information theoretic learning; nonparametric estimator; probability density function; Data mining; Density functional theory; Energy measurement; Laboratories; Machine learning; Mean square error methods; Mutual information; Neural engineering; Signal processing algorithms;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Neural Networks, 1999. IJCNN '99. International Joint Conference on
Conference_Location :
Washington, DC
ISSN :
1098-7576
Print_ISBN :
0-7803-5529-6
Type :
conf
DOI :
10.1109/IJCNN.1999.832648
Filename :
832648