Abstract:
The p-values observed in independent tests of some hypothesis are, under the overall null hypothesis, a sample from the standard uniform distribution. Thus uniformity tests are an important tool in meta-analysis, and the family of probability density functions f_{X_m}(x) = (mx − (m − 2)/2) I_(0,1)(x), m ∈ [−2, 0], provides an interesting theoretical framework for assessing uniformity. In fact, H0: m = 0 stands for uniformity, and the alternative HA: m ∈ [−2, 0) should favor rejection of the overall null hypothesis, since the smaller m is, the more probable it is to observe p-values near 0. However, computational sample augmentation techniques using this family have the unexpected effect of decreasing power. We investigate the usefulness of products, and powers of products, of variables from that family in testing uniformity. We show by simulation that those transformations also have a negative impact on the power of uniformity goodness-of-fit tests. The Kakutani inner product of the measures under the null hypothesis and under the alternative hypothesis confirms increased collinearity and poorer fit. In fact, the techniques investigated pull the entropy towards 0, which is the entropy of the standard uniform random variable. This sheds light on why testing uniformity can be rather misleading.
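To make the power-loss phenomenon concrete, the sketch below simulates samples from the family above and compares the Monte Carlo power of a Kolmogorov-Smirnov uniformity test on the raw sample with the same test after a pairwise-product transformation. This is a minimal illustration, not the paper's exact procedure: the inverse-CDF sampler, the choice m = −0.5, the sample sizes, and the use of F(t) = t(1 − ln t) (the CDF of a product of two independent standard uniforms) to map products back to (0,1) under H0 are all assumptions made for the example.

```python
# Minimal Monte Carlo sketch (illustrative assumptions, not the paper's setup):
# power of a KS uniformity test on raw samples from f_{X_m} versus power after
# a pairwise-product transformation of the same samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sample_Xm(m, size):
    """Inverse-CDF sampling from f_{X_m}(x) = mx - (m-2)/2 on (0,1), m in [-2, 0)."""
    u = rng.uniform(size=size)
    a, b = m / 2.0, (2.0 - m) / 2.0          # CDF: F(x) = a*x**2 + b*x
    return (-b + np.sqrt(b * b + 4.0 * a * u)) / (2.0 * a)

def ks_reject(sample, alpha=0.05):
    """True if the KS test rejects uniformity on (0,1) at level alpha."""
    return stats.kstest(sample, "uniform").pvalue < alpha

def power(m=-0.5, n=50, reps=2000):
    raw, prod = 0, 0
    for _ in range(reps):
        x = sample_Xm(m, n)
        raw += ks_reject(x)
        # Pairwise products, mapped back to (0,1) via the CDF of a product
        # of two independent standard uniforms: F(t) = t*(1 - ln t).
        t = x[: n // 2] * x[n // 2 :]
        prod += ks_reject(t * (1.0 - np.log(t)))
    return raw / reps, prod / reps

print("Monte Carlo power (raw sample, product-transformed):", power())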
Keywords:
Entropy, Random variables, Testing, Probability density function, Computational modeling, Transforms, Electronic mail