In this paper, we study the problem of stabilizing a nonlinear control system by means of a feedback control law, in cases where the entire state of the system is not available for measurement. The proposed method of stabilization consists of three parts: 1) determine a stabilizing control law based on state feedback, assuming the state vector $x$ can be measured; 2) construct a state detection mechanism, which generates a vector $\hat{x}$ such that $\hat{x}(t) - x(t) \to 0$ as $t \to \infty$; and 3) apply the previously determined control law to $\hat{x}$ in place of $x$.
This scheme is well established for linear time-invariant systems, but has not previously been studied in the case of nonlinear systems. Hence, the contribution of this paper is in showing that such a scheme works in the absence of any linearity assumptions, and in studying both local and global asymptotic stability.
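To fix ideas, one common realization of such a scheme is the observer-based loop sketched below; this sketch is illustrative only, the detector structure shown is not necessarily the one adopted in this paper, and the symbols $f$, $h$, $\gamma$, and $K$ are generic placeholders rather than notation used in the sequel:
\begin{align*}
\dot{x} &= f(x,u), \qquad y = h(x) && \text{(plant with partially measured state)}\\
u &= \gamma(x) && \text{(step 1: stabilizing state-feedback law)}\\
\dot{\hat{x}} &= f(\hat{x},u) + K\bigl(y - h(\hat{x})\bigr) && \text{(step 2: a state detector with } \hat{x}(t) - x(t) \to 0 \text{ as } t \to \infty\text{)}\\
u &= \gamma(\hat{x}) && \text{(step 3: the same law applied to the estimate)}
\end{align*}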