Title :
An Efficient Selective Perceptual-Based Super-Resolution Estimator
Author :
Karam, Lina J. ; Sadaka, Nabil G. ; Ferzli, Rony ; Ivanovski, Zoran A.
Author_Institution :
Sch. of Electr., Comput., & Energy Eng., Arizona State Univ., Tempe, AZ, USA
Abstract :
In this paper, a selective perceptual-based (SELP) framework is presented to reduce the complexity of popular super-resolution (SR) algorithms while maintaining the desired quality of the enhanced images/video. A perceptual human visual system (HVS) model is proposed to compute local contrast sensitivity thresholds. The obtained thresholds are used to select which pixels are super-resolved based on the perceived visibility of local edges. Processing only a set of perceptually significant pixels significantly reduces the computational complexity of SR algorithms without sacrificing the achievable visual quality. The proposed SELP framework is integrated into a maximum a posteriori (MAP)-based SR algorithm as well as a fast two-stage fusion-restoration SR estimator. Simulation results show a significant average reduction in computational complexity with comparable signal-to-noise ratio gains and visual quality.
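The selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual HVS model: the Weber-like contrast measure, the block size, and the fixed `base_threshold` are placeholder assumptions standing in for the local contrast sensitivity thresholds the paper derives; the resulting boolean mask marks the perceptually significant pixels that would be passed to the expensive SR update.

```python
import numpy as np

def selection_mask(image, block=8, base_threshold=0.02):
    """Mark pixels whose local contrast exceeds a visibility threshold,
    so only those pixels are handed to the (costly) SR estimator.

    `base_threshold` and the contrast measure below are illustrative
    stand-ins for the paper's perceptual contrast sensitivity model.
    """
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = image[i:i + block, j:j + block]
            # Simple local (Weber-like) contrast of the block; the
            # paper instead compares against HVS-derived thresholds.
            contrast = (patch.max() - patch.min()) / (patch.mean() + 1e-8)
            if contrast > base_threshold:
                # Block contains a perceptually visible edge:
                # flag its pixels for super-resolution.
                mask[i:i + block, j:j + block] = True
    return mask
```

Blocks with visible edges are flagged; flat blocks are skipped, which is where the complexity saving comes from.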
Keywords :
computational complexity; image enhancement; image fusion; image resolution; image restoration; maximum likelihood estimation; video signal processing; SELP framework; image-video enhancement; local contrast sensitivity thresholds; local edge perceived visibility; maximum-a posteriori-based SR algorithm; perceptual human visual system model; pixel selection; selective perceptual-based super-resolution estimator; signal-to-noise ratio gains; two-stage fusion-restoration SR estimator; Computational modeling; Image edge detection; Image reconstruction; Maximum a posteriori estimation; Maximum likelihood estimation; Visual system; Edge detection; human visual system (HVS); maximum a posteriori (MAP) estimator; maximum-likelihood estimator; perceptual quality; reduced complexity; super-resolution (SR)
Journal_Title :
Image Processing, IEEE Transactions on
DOI :
10.1109/TIP.2011.2159324