Authors :
Malo, J., Gutierrez, J., Epifanio, I., Ferri, F.J., Artigas, J.M.
Abstract :
In this paper, a multigrid motion compensation video coder based on current human visual system (HVS) contrast-discrimination models is proposed. A novel procedure for encoding the prediction errors is used that restricts the maximum perceptual distortion in each transform coefficient. This subjective redundancy removal procedure includes the amplitude nonlinearities and some temporal features of human perception. A perceptually weighted control of the adaptive motion estimation algorithm has also been derived from this model. Perceptual feedback in motion estimation ensures a perceptual balance between the motion estimation effort and the redundancy removal process. The results show that this feedback induces a scale-dependent refinement strategy that gives rise to more robust and meaningful motion estimation, which may facilitate higher-level sequence interpretation. Perceptually meaningful distortion measures and the reconstructed frames show the subjective improvements of the proposed scheme versus an H.263 scheme with unweighted motion estimation and MPEG-like quantization.
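As a purely illustrative sketch of the kind of perceptual quantization the abstract describes (not the authors' actual HVS model), the following Python fragment quantizes the DCT coefficients of a prediction-error block with a per-coefficient step derived from a contrast sensitivity function (CSF) weighting plus a simple amplitude nonlinearity, so the error in each coefficient stays below a frequency-dependent visibility threshold. The CSF weights, exponent, step size, and function name are all hypothetical placeholders.

    import numpy as np
    from scipy.fft import dctn, idctn

    def perceptually_quantize_block(block, csf_weights, base_step=4.0, exponent=0.3):
        # Forward 2-D DCT of the prediction-error block.
        coeffs = dctn(block, norm='ortho')
        # Per-coefficient quantization step: coarser where the CSF says errors
        # are less visible, and growing with amplitude to mimic a
        # contrast-discrimination (amplitude) nonlinearity. All constants are
        # illustrative, not taken from the paper.
        steps = (base_step / csf_weights) * (1.0 + np.abs(coeffs)) ** exponent
        quantized = np.round(coeffs / steps) * steps
        # The error in each coefficient is bounded by half its local step,
        # i.e. a roughly constant perceptual distortion per coefficient.
        return idctn(quantized, norm='ortho')

    # Toy usage with a hypothetical low-pass CSF on an 8x8 block.
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
    csf = 1.0 / (1.0 + 0.3 * np.hypot(u, v))   # illustrative weights only
    error_block = np.random.default_rng(0).normal(scale=20.0, size=(8, 8))
    reconstructed = perceptually_quantize_block(error_block, csf)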
Keywords :
perceptual quantization, nonlinear human vision model, video coding, entropy-constrained motion estimation