The sensitivity of discrete-time models of continuous-time systems in digital identification schemes is considered as a function of the sampling interval. It is shown that the common assumption that the higher the sampling rate, the better a discrete-time model represents a continuous-time system is not true in general. Rather, it is shown that an optimum sampling rate exists which minimizes the effect of estimation errors. Experimental results are presented which confirm the existence of this optimum sampling interval, and close agreement is found between the theoretically predicted optimum sampling interval and the experimental results.
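
The mechanism behind this claim can be illustrated with a minimal sketch that is not taken from the paper: for a first-order system with time constant tau, zero-order-hold discretization gives a pole a = exp(-T/tau), and recovering tau from an estimate of a amplifies estimation errors by |d tau / d a|, which grows both as T approaches 0 and as T grows large. The choice of a first-order system, the symbol tau, and the grid of sampling intervals below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Illustrative sketch (assumed first-order system, not the paper's setup):
# x_dot = -x/tau discretized by zero-order hold gives x[k+1] = a*x[k]
# with a = exp(-T/tau). Inverting a_hat back to tau_hat = -T/ln(a_hat),
# a small error delta_a in the discrete estimate produces a continuous-time
# parameter error of roughly |d tau / d a| * delta_a, where
#   d tau / d a = T / (a * (ln a)^2) = tau**2 * exp(T/tau) / T.
# This sensitivity diverges as T -> 0 and as T -> infinity, so an
# optimum sampling interval exists between the two extremes.

tau = 1.0                                   # assumed continuous time constant
T = np.linspace(0.05, 5.0, 200)             # candidate sampling intervals
sensitivity = tau**2 * np.exp(T / tau) / T  # |d tau_hat / d a_hat|

T_opt = T[np.argmin(sensitivity)]
print(f"optimum sampling interval ~ {T_opt:.2f} (analytic minimum: T = tau = {tau})")
```

For this toy case the minimum of the sensitivity curve falls at T = tau, matching the abstract's point that neither very fast nor very slow sampling is best; the paper's analysis establishes the corresponding result for its identification scheme.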