Let $R^*$ denote the Bayes risk (minimum expected loss) for the problem of estimating $\theta$, given an observed random variable $x$, joint probability distribution $F(x, \theta)$, and loss function $L$. Consider the problem in which the only knowledge of $F$ is that which can be inferred from samples $(x_1, \theta_1), (x_2, \theta_2), \ldots, (x_n, \theta_n)$, where the $(x_i, \theta_i)$ are independently identically distributed according to $F$. Let the nearest neighbor estimate of the parameter $\theta$ associated with an observation $x$ be defined to be the parameter $\theta_n'$ associated with the nearest neighbor $x_n'$ to $x$. Let $R$ be the large sample risk of the nearest neighbor rule. It will be shown, for a wide range of probability distributions, that $R \le 2R^*$ for metric loss functions and $R = 2R^*$ for squared-error loss functions. A simple estimator using the nearest $k$ neighbors yields $R = (1 + 1/k)R^*$
in the squared-error loss case. In this sense, it can be said that at least half the information in the infinite training set is contained in the nearest neighbor. This paper is an extension of earlier work [1] from the problem of classification by the nearest neighbor rule to that of estimation. However, the unbounded loss functions in the estimation problem introduce additional difficulties concerning the convergence of the unconditional risk. Thus some work is devoted to investigating natural conditions on the underlying distribution that assure the desired convergence.
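The constants in the squared-error case admit a short heuristic sketch (not a proof; it assumes the limiting behavior established in the body of the paper, namely that as $n \to \infty$ the nearest neighbor $x_n'$ converges to $x$, so that $\theta$ and the neighbor parameters behave as conditionally independent draws from the posterior given $x$; the notation $r^*(x)$, $\theta'$, and $\hat\theta$ is introduced here only for the sketch). Writing $r^*(x) = \operatorname{Var}(\theta \mid x)$ for the conditional Bayes risk under squared-error loss,
\[
E\bigl[(\theta - \theta')^2 \mid x\bigr] = \operatorname{Var}(\theta \mid x) + \operatorname{Var}(\theta' \mid x) = 2\,r^*(x),
\]
so that $R = E[2\,r^*(x)] = 2R^*$. Averaging the parameters of the nearest $k$ neighbors, $\hat\theta = \tfrac{1}{k}\sum_{i=1}^{k}\theta_i'$, gives
\[
E\bigl[(\theta - \hat\theta)^2 \mid x\bigr] = \operatorname{Var}(\theta \mid x) + \tfrac{1}{k}\operatorname{Var}(\theta \mid x) = \Bigl(1 + \tfrac{1}{k}\Bigr) r^*(x),
\]
and hence $R = (1 + 1/k)R^*$ in this limit.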