Abstract:
Speakers monitor their speech output by listening to their own voice. However, signers do not look
directly at their hands and cannot see their own face. We investigated the importance of a visual
perceptual loop for sign language monitoring by examining whether changes in visual input alter sign
production. Deaf signers produced American Sign Language (ASL) signs within a carrier phrase under
five conditions: blindfolded, wearing tunnel-vision goggles, normal (citation) signing, shouting, and
informal signing. Three-dimensional movement trajectories were obtained using an Optotrak Certus
system. Informally produced signs were shorter with less vertical movement. Shouted signs were
displaced forward and to the right and were produced within a larger volume of signing space, with
greater velocity, greater distance traveled, and a longer duration. Tunnel vision caused signers to
produce less movement within the vertical dimension of signing space, but blindfolded and citation
signing did not differ significantly on any measure except duration. Thus, signers do not “sign louder” when
they cannot see themselves, but they do alter their sign production when vision is restricted. We
hypothesize that visual feedback serves primarily to fine-tune the size of signing space rather than as
input to a comprehension-based monitor.