A method for automatically controlling the threshold bias in a detector is described and analyzed. In Section I, the threshold bias problem is described: the bias is set for a constant false alarm rate (or a constant false alarm time). By "standard biasing" is meant the common practice of adjusting the required bias under the assumption that the noise is Gaussian and has a flat power spectrum. In Section II, the error made by standard biasing when the Gaussian noise turns out not to have a flat power spectrum is given. In Section III, the automatic biasing method is given for the case where a constant false alarm time is required (its operation to maintain a constant false alarm rate is analogous). The device envisioned operates as follows: one bias level is used as a reference; the number of positive-slope crossings of this reference level by the noise envelope per false alarm time is averaged over a sufficiently long time to yield a stable value. This average count serves to determine the threshold bias. The threshold changes only if the "long time" average count changes; it is specifically assumed that there is no response to instantaneous changes in the count.
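For orientation (this relation is not quoted from the paper, but is the standard Rice-type result on which flat-spectrum biasing of an envelope detector rests): for narrowband Gaussian noise of power \sigma^{2}, the mean rate of positive-slope crossings of a level b by the noise envelope has the form

    \nu(b) \;=\; C\,\frac{b}{\sigma^{2}}\,\exp\!\left(-\frac{b^{2}}{2\sigma^{2}}\right),
    \qquad
    T_{\mathrm{fa}}(b) \;=\; \frac{1}{\nu(b)},

where the constant C is fixed by the shape and width of the noise power spectrum. Standard biasing inserts the flat-spectrum value of C and solves T_{\mathrm{fa}}(b) = \text{const} for the bias b, which is where the error of Section II enters when the actual spectrum is not flat; the count measured at the reference level, by contrast, reflects the crossing behaviour of the noise actually present.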

It is shown that such a biasing method automatically adjusts to give a constant false alarm time (or rate) whenever the noise is Gaussian, and so has an advantage over the standard biasing method. In Section IV, the efficiencies of the two methods are compared for non-Gaussian noise. In Section V, the probability of detection of a short (relative to the averaging time) "sure" signal with Rayleigh-distributed amplitude is given when automatic biasing is used; because of the complexity of the expression obtained, no direct comparison has been made with the case where standard biasing is used.
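As a closing illustration, the counting-and-averaging operation of Section III might be sketched as follows; this is not the paper's implementation, and the sampling into blocks, the names, and in particular the count-to-bias mapping count_to_bias are assumptions introduced here for illustration only.

    import numpy as np

    def count_upcrossings(envelope, level):
        """Number of positive-slope crossings of `level` by a sampled envelope."""
        above = np.asarray(envelope) >= level
        # A positive-slope crossing: a sample below the level followed by one at/above it.
        return int(np.count_nonzero(~above[:-1] & above[1:]))

    def automatic_bias(envelope_blocks, reference_level, count_to_bias):
        """Set the threshold bias from the long-time average crossing count.

        envelope_blocks  -- iterable of sampled envelope segments (e.g. one per false alarm time)
        reference_level  -- fixed reference bias level at which crossings are counted
        count_to_bias    -- hypothetical monotone mapping from the average count to the
                            threshold bias (the paper derives the actual relation)
        """
        counts = [count_upcrossings(block, reference_level) for block in envelope_blocks]
        n_bar = float(np.mean(counts))   # stable "long time" average count
        return count_to_bias(n_bar)      # the bias responds only to this slow average

Because the returned bias depends only on the long-time average of the per-block counts, momentary fluctuations of the envelope leave the threshold unchanged, in keeping with the assumption stated above.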