The performance of a multiclass maximum likelihood decision rule is analyzed when inaccurate versions of the true probability density functions are used. A general bound on the error probability is developed that is valid both for finite observation size $n$ and for $n \to \infty$.
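A minimal sketch of the setting may help fix ideas; the notation ($M$ classes, true $n$-sample densities $f_i^{(n)}$, inaccurate versions $\hat f_i^{(n)}$) is assumed here for illustration and is not taken from the abstract itself.

```latex
% Assumed notation: M classes, observation x^n = (x_1, \dots, x_n),
% true n-sample densities f_i^{(n)}, inaccurate versions \hat f_i^{(n)}.
% The mismatched maximum likelihood rule decides using the inaccurate densities:
\[
  \delta(x^n) \;=\; \arg\max_{1 \le i \le M} \, \hat f_i^{(n)}(x^n),
\]
% while the error probability P_e(n) is evaluated under the true densities f_i^{(n)}.
```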
A necessary and sufficient condition is developed for the bound to be less than one and to converge exponentially to zero, excluding the degenerate case of equality between the informational divergence expressions. The condition is given in terms of the informational divergence per sample, both for finite $n$ and for the asymptotic case. As long as the inaccurate density lies in a "tolerance region" around the true density of its class, exponential convergence of the error to zero is maintained. Specific expressions for the bounds and the informational divergence are obtained for homogeneous Markov chain observations and for Gaussian stationary process observations in discrete time.
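The shape of such a per-sample divergence condition can be sketched as follows, writing $K(f \,\|\, g)$ for the informational divergence; the exact form and symbols are assumptions for illustration, not quoted from the paper.

```latex
% For each true class i and each competing class j != i, the inaccurate model
% of the wrong class must stay strictly farther from the truth (per sample)
% than the inaccurate model of the correct class:
\[
  \lim_{n \to \infty} \frac{1}{n}
  \Bigl[ K\bigl(f_i^{(n)} \,\big\|\, \hat f_j^{(n)}\bigr)
       - K\bigl(f_i^{(n)} \,\big\|\, \hat f_i^{(n)}\bigr) \Bigr] \;>\; 0 .
\]
% The "tolerance region" around f_i is then the set of inaccurate densities
% \hat f_i for which this strict inequality holds for every j != i.
```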
The computational complexity of evaluating the asymptotic bounding expression for the $m$-dimensional Gaussian process case is shown to be $O(m^3)$, which is much smaller than the complexity $O\bigl((nm)^3\bigr)$ required for the evaluation of the bound for finite sample size $n$.
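The gap between the two complexities can be read as the usual cost difference between time-domain and frequency-domain evaluation of Gaussian likelihood expressions; the accounting below is a sketch under that assumption, not a derivation from the paper.

```latex
% Finite n: the stacked observation x^n of an m-dimensional process has an
% (nm) x (nm) covariance matrix, and each log-determinant or inverse needed
% by a Chernoff-type bound costs O((nm)^3).
% Asymptotic: a Szego-type limit replaces these by integrals of log det of
% the m x m spectral density matrix S(\omega), so each frequency point
% costs only O(m^3), independent of the sample size n.
\[
  \underbrace{O\bigl((nm)^3\bigr)}_{\log\det \Sigma_n \ \text{(finite } n\text{)}}
  \qquad \text{vs.} \qquad
  \underbrace{O(m^3) \ \text{per frequency}}_{\log\det S(\omega) \ \text{(asymptotic)}}
\]
```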