DocumentCode :
542306
Title :
On feature extraction by mutual information maximization
Author :
Torkkola, Kari
Author_Institution :
Motorola Labs, 7700 South River Parkway, MD ML28, Tempe AZ 85284, USA
Volume :
1
fYear :
2002
fDate :
13-17 May 2002
Abstract :
We discuss the mutual information between class labels and transformed features as a criterion for learning discriminative feature transforms. Instead of Shannon's definition, we use measures based on Renyi entropy, which lends itself to an efficient implementation and to an interpretation in terms of “information potentials” and “information forces” induced by samples of data. This paper presents two routes toward practical usability of the method, aimed especially at large databases: the first is an on-line stochastic gradient algorithm, and the second is based on approximating the class densities in the output space by Gaussian mixture models.
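As a rough illustration of the “information potential” and “information force” interpretation mentioned in the abstract, the following sketch estimates the quadratic Renyi entropy of a sample with a Parzen Gaussian kernel and differentiates it with respect to the samples. This is not the paper's implementation; the function names and the choice of a fixed kernel width `sigma` are assumptions for illustration only.

```python
import numpy as np

def information_potential(y, sigma=1.0):
    """Parzen estimate of the quadratic Renyi 'information potential'
    V(y) = (1/N^2) * sum_ij G(y_i - y_j; 2*sigma^2),
    so that the quadratic Renyi entropy is H_R2(y) = -log V(y)."""
    y = np.atleast_2d(y)                           # shape (N, d)
    n, d = y.shape
    diff = y[:, None, :] - y[None, :, :]           # pairwise differences
    sq = np.sum(diff ** 2, axis=-1)                # squared distances
    var = 2.0 * sigma ** 2                         # convolved kernel variance
    g = np.exp(-sq / (2.0 * var)) / ((2.0 * np.pi * var) ** (d / 2))
    return g.mean()                                # average over all N^2 pairs

def information_forces(y, sigma=1.0):
    """Gradient dV/dy_i: each sample is 'pushed' by every other sample,
    which is the physical 'information force' picture."""
    y = np.atleast_2d(y)
    n, d = y.shape
    diff = y[:, None, :] - y[None, :, :]
    sq = np.sum(diff ** 2, axis=-1)
    var = 2.0 * sigma ** 2
    g = np.exp(-sq / (2.0 * var)) / ((2.0 * np.pi * var) ** (d / 2))
    # dG/dy_i = -G * (y_i - y_j) / var; the symmetric (j, i) pair doubles it
    return -(g[:, :, None] * diff).sum(axis=1) * (2.0 / (n ** 2 * var))
```

In a gradient scheme such as the on-line algorithm the abstract mentions, these forces would be chained through the Jacobian of the feature transform to adapt its parameters.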
Keywords :
Computational modeling; Entropy; Feature extraction; Hidden Markov models; Markov processes; Transforms;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference on
Conference_Location :
Orlando, FL, USA
ISSN :
1520-6149
Print_ISBN :
0-7803-7402-9
Type :
conf
DOI :
10.1109/ICASSP.2002.5743865
Filename :
5743865