A quantizer $Q_n$ divides the range $[0,1]$ of a random variable $x$ into $n$ quantizing intervals, the $i$th such interval having length $q_i$. We define the quantization error for a particular value of $x$ (unusually) as the length of the quantizing interval in which $x$ finds itself, and measure quantizer performance (unusually) by the $r$th mean value $M_r(Q_n,F)$ of the quantizing interval lengths $q_i$, averaging with respect to the distribution function $F$ of the random variable $x$. $Q_n^*$ is defined to be an optimum quantizer if $M_r(Q_n^*,F) \le M_r(Q_n,F)$ for all $Q_n$. The unusual definitions restrict the results to bounded random variables, but lead to general and precise results. We define a class $\mathcal{C}_n$ of quasi-optimum quantizers; $Q_n$ is in $\mathcal{C}_n$ if the different intervals of $Q_n$ make equal contributions to the mean $r$th power of the interval size, so that $P_i q_i^{\,r}$ is constant for all $i$, where $P_i$ denotes the probability that $x$ falls in the $i$th interval.
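As a concrete illustration of these definitions, take the uniform distribution $F(x)=x$ and read the $r$th mean value, for $r>0$, as $M_r(Q_n,F)=\bigl(\sum_{i=1}^{n} P_i q_i^{\,r}\bigr)^{1/r}$ (this explicit formula is our reading of the abstract, not a quotation from it). For the equal-interval quantizer,

\[
q_i=\tfrac{1}{n},\qquad P_i=\tfrac{1}{n},\qquad P_i q_i^{\,r}=n^{-(1+r)}\ \text{for every } i,\qquad
M_r(Q_n,F)=\bigl(n\cdot n^{-(1+r)}\bigr)^{1/r}=\tfrac{1}{n}.
\]

Since $P_i=q_i$ here, Jensen's inequality gives $\sum_i P_i q_i^{\,r}=\sum_i q_i^{\,r+1}\ge n^{-r}$ for any lengths with $\sum_i q_i=1$, so the equal-interval quantizer is the optimum $Q_n^*$ for the uniform distribution, and it lies in $\mathcal{C}_n$ because every interval contributes the same amount $n^{-(1+r)}$.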

Theorems 1, 2, 3, and 4 prove that $Q_n^*$ exists and is unique for given $n$, $r$, and $F$; that $n\,M_r(Q_n,F)\ge\bigl(\int_0^1 f(x)^{1/(1+r)}\,dx\bigr)^{(1+r)/r}$ for every $Q_n$, where $f$ is the density of the absolutely continuous part of the distribution function $F$ of $x$; that $n\,M_r(Q_n^*,F)\to\bigl(\int_0^1 f(x)^{1/(1+r)}\,dx\bigr)^{(1+r)/r}$ as $n\to\infty$; and that if a quantizer attains this bound for finite $n$, then $F$ is absolutely continuous with a density that is constant on each quantizing interval, and the quantizer is quasi-optimum.
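To make the asymptotic statement concrete, here is a minimal numerical sketch (ours, not from the paper). It assumes the explicit formula $M_r(Q_n,F)=\bigl(\sum_i P_i q_i^{\,r}\bigr)^{1/r}$ and the limit expression written above, takes the density $f(x)=2x$ on $[0,1]$ (so $F(x)=x^2$ and, for $r=1$, the limit is $\bigl(\int_0^1\sqrt{2x}\,dx\bigr)^2=\bigl(\tfrac{2\sqrt{2}}{3}\bigr)^2=\tfrac{8}{9}$), and places the interval endpoints by the companding rule $x_i=(i/n)^{(1+r)/(2+r)}$, which asymptotically equalizes the contributions $P_i q_i^{\,r}$. The helper names quasi_optimum_endpoints and nth_mean_error are hypothetical.

# Illustrative sketch under the assumptions stated above.
def quasi_optimum_endpoints(n: int, r: float) -> list[float]:
    """Endpoints 0 = x_0 < ... < x_n = 1 for the density f(x) = 2x, chosen so that
    each interval carries an equal share of the integral of f**(1/(1+r))."""
    return [(i / n) ** ((1 + r) / (2 + r)) for i in range(n + 1)]

def nth_mean_error(n: int, r: float) -> float:
    """Return n * M_r(Q_n, F) with M_r = (sum_i P_i * q_i**r)**(1/r) and F(x) = x**2."""
    x = quasi_optimum_endpoints(n, r)
    total = 0.0
    for i in range(1, n + 1):
        q_i = x[i] - x[i - 1]            # interval length
        p_i = x[i] ** 2 - x[i - 1] ** 2  # interval probability under F(x) = x**2
        total += p_i * q_i ** r
    return n * total ** (1 / r)

if __name__ == "__main__":
    r = 1.0
    limit = (2 ** (1 / (1 + r)) * (1 + r) / (2 + r)) ** ((1 + r) / r)  # 8/9 when r = 1
    for n in (10, 100, 1000):
        print(n, nth_mean_error(n, r), limit)  # n * M_r approaches the limit as n grows

For finite $n$ the computed value stays above $8/9$, consistent with the lower bound, and approaches it as $n$ grows; the uniform density, by contrast, has $n\,M_r(Q_n^*,F)=1$ for every $n$.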