DocumentCode :
983643
Title :
Continuously Differentiable Sample-Spacing Entropy Estimation
Author :
Ozertem, Umut ; Uysal, Ismail ; Erdogmus, Deniz
Author_Institution :
Yahoo Inc., Sunnyvale, CA
Volume :
19
Issue :
11
fYear :
2008
Firstpage :
1978
Lastpage :
1984
Abstract :
The insufficiency of second-order statistics alone and the promise of exploiting higher order statistics of the data are now well understood, and more advanced objectives based on higher order statistics, especially those stemming from information theory, such as error entropy minimization, are being studied and applied in many contexts of machine learning and signal processing. In adaptive system training, the main drawback of utilizing the output error entropy, as compared to correlation-estimation-based second-order statistics, is the computational load of the entropy estimation, which is usually obtained via a plug-in kernel estimator. Sample-spacing estimates offer computationally inexpensive entropy estimators; however, the resulting estimates are not differentiable and hence unsuitable for gradient-based adaptation. In this brief, we propose a nonparametric entropy estimator that combines the desirable properties of both approaches. The resulting estimator yields continuously differentiable estimates with a computational complexity on the order of that of sample-spacing techniques. The proposed estimator is compared with the kernel density estimation (KDE)-based entropy estimator in a supervised neural network training framework, with comparisons of computation time and performance.
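For orientation, the sketch below shows the classical m-spacing (Vasicek-type) entropy estimator that the abstract refers to as a sample-spacing estimate. It is a hedged illustration, not the paper's method: the function name vasicek_entropy and the m ≈ sqrt(N) heuristic are assumptions introduced here. The sort over order statistics is what makes this baseline cheap but non-differentiable; the paper's continuously differentiable variant is not reproduced.

    import numpy as np

    def vasicek_entropy(x, m=None):
        # Classical m-spacing entropy estimate (Vasicek, 1976):
        # H ~ (1/N) * sum_i log( N * (x_(i+m) - x_(i-m)) / (2m) )
        # over the sorted sample, clamping indices at the boundaries.
        # Non-differentiable baseline: sorting makes the estimate
        # insensitive to infinitesimal sample perturbations, which is
        # the drawback the paper's smooth estimator addresses.
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        if m is None:
            m = max(1, int(np.sqrt(n)))      # common heuristic, assumed here
        hi = np.minimum(np.arange(n) + m, n - 1)
        lo = np.maximum(np.arange(n) - m, 0)
        spacings = np.maximum(x[hi] - x[lo], 1e-12)  # guard against ties
        return np.mean(np.log(n * spacings / (2.0 * m)))

    # Sanity check: true entropy of N(0,1) is 0.5*log(2*pi*e) ~ 1.4189
    rng = np.random.default_rng(0)
    print(vasicek_entropy(rng.standard_normal(5000)))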
Keywords :
adaptive estimation; adaptive systems; computational complexity; correlation methods; error statistics; higher order statistics; learning (artificial intelligence); minimum entropy methods; sampling methods; adaptive system training; computational complexity; continuously differentiable sample-spacing nonparametric entropy estimation; correlation-estimation-based second-order statistic; error entropy minimization; gradient-based adaptation; higher order statistic; information theory; kernel density estimation-based entropy estimator; machine learning; signal processing; supervised neural network training framework; Adaptive signal processing; Adaptive systems; Computational complexity; Entropy; Error analysis; Higher order statistics; Information theory; Kernel; Machine learning; Yield estimation; Entropy estimation; minimum error entropy (MEE) criterion; supervised neural network training; Algorithms; Computer Simulation; Data Interpretation, Statistical; Entropy; Models, Theoretical; Neural Networks (Computer); Pattern Recognition, Automated; Sample Size;
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/TNN.2008.2006167
Filename :
4668662