DocumentCode :
928199
Title :
Quantization complexity and training sample size in detection
Author :
Kazakos, Dimitri
Volume :
24
Issue :
2
fYear :
1978
fDate :
3/1/1978
Firstpage :
229
Lastpage :
237
Abstract :
For the $k$-hypothesis detection problem, it is shown that, among the $k$ classes of probability density functions with $m$ fixed quantiles, histograms achieve the least favorable performance as measured by the probability of correct detection and the Chernoff distance. It is assumed that the $m$ cell probabilities are estimated using $n$ training samples per class. With the aid of the estimated cell probabilities, new observations are processed. A distribution-free upper bound on the probability of $\epsilon$-deviation between the actual probability of correct detection and the theoretical (known quantiles) probability is derived as a function of $(m, n, \epsilon, k, u_0)$, where $u_0$ is a uniform upper bound on the true class densities. The bound converges exponentially to zero as $n \rightarrow \infty$. Exponential convergence is obtained by choosing $m = n^{\alpha}$, $0 < \alpha < 1$. Hence, the rule $m = n^{\alpha}$ answers the long-standing question of how to relate $m$ and $n$ in a distribution-free manner. The question of the optimal choice of $\alpha$ is also discussed.
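A minimal sketch (mine, not the paper's) of the histogram plug-in detector the abstract describes, using the rule m = n^alpha to pick the number of quantization cells. The use of NumPy, the function names, alpha = 0.5, and the Laplace smoothing of the cell estimates are all illustrative assumptions; the paper itself supplies only the rule and the distribution-free bounds.

    import numpy as np

    def train_histogram_detector(train, alpha=0.5):
        # train: list of k 1-D arrays, one per class, each holding n training samples.
        n = min(len(x) for x in train)
        m = max(2, int(round(n ** alpha)))   # paper's rule: m = n^alpha, 0 < alpha < 1
        pooled = np.concatenate(train)
        # Common quantile grid: each class is then summarized by m estimated
        # cell probabilities, as in the abstract.
        edges = np.quantile(pooled, np.linspace(0.0, 1.0, m + 1))
        probs = []
        for x in train:
            counts, _ = np.histogram(x, bins=edges)
            probs.append((counts + 1.0) / (len(x) + m))  # smoothing (an assumption) keeps log-likelihoods finite
        return edges, np.array(probs)

    def detect(obs, edges, probs):
        # Classify a batch of new observations with the plug-in log-likelihood rule.
        cells = np.clip(np.searchsorted(edges, obs) - 1, 0, probs.shape[1] - 1)
        scores = np.log(probs)[:, cells].sum(axis=1)     # one joint score per class
        return int(np.argmax(scores))

The choice m = n^alpha with 0 < alpha < 1 reflects the trade-off the abstract points to: more cells give a finer quantization, but each cell probability is then estimated from fewer of the n training samples, and the exponential convergence of the deviation bound is what constrains alpha to (0, 1).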
Keywords :
Quantization (signal); Signal detection; Signal quantization; Circuit noise; Circuit testing; Circuits and systems; Detectors; Probability density function; Quantization; Robustness; Statistics; Upper bound;
fLanguage :
English
Journal_Title :
IEEE Transactions on Information Theory
Publisher :
IEEE
ISSN :
0018-9448
Type :
jour
DOI :
10.1109/TIT.1978.1055851
Filename :
1055851