DocumentCode :
638982
Title :
Learning image saliency from human touch behaviors
Author :
Shaomin Fang ; Yijuan Lu ; Xinmei Tian
Author_Institution :
Dept. of Comput. Sci., Texas State Univ. - San Marcos, San Marcos, TX, USA
fYear :
2013
fDate :
15-19 July 2013
Firstpage :
1
Lastpage :
4
Abstract :
The concept of touch saliency was recently introduced to generate image saliency maps from users' simple zoom behavior on touch devices. However, when browsing images on a touch screen, users apply a variety of touch behaviors, such as pinch zoom, tap, double-tap zoom, and scroll. Do these different behaviors correspond to different patterns of human attention? Which behaviors are most strongly correlated with human eye fixation? How can a good image saliency map be learned from multiple touch behaviors? In this work, we design and conduct a series of studies to address these open questions. We also propose a novel touch saliency learning approach that derives an image saliency map from a variety of human touch behaviors using a machine learning algorithm. The experimental results demonstrate the validity of our study and the potential and effectiveness of the proposed approach.
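As an illustration only (the abstract does not specify the paper's learning algorithm), the Python sketch below shows one plausible way to fuse per-behavior touch heatmaps into a single saliency map: fit one weight per behavior against eye-fixation maps with ridge regression, then combine. The behavior names, map shapes, and choice of regression are all assumptions, not the authors' method.

import numpy as np

# Assumed behavior set, taken from the abstract; the paper may use a different taxonomy.
BEHAVIORS = ["pinch_zoom", "tap", "double_tap_zoom", "scroll"]

def learn_fusion_weights(touch_maps, fixation_maps, reg=1e-3):
    """touch_maps: per-image dicts {behavior: HxW heatmap}.
    fixation_maps: per-image HxW eye-fixation ground-truth maps.
    Fits one non-negative weight per behavior by ridge regression
    (an assumed stand-in for the paper's learner)."""
    X, y = [], []
    for maps, fix in zip(touch_maps, fixation_maps):
        # Stack the behavior maps of one image into a (pixels x behaviors) matrix.
        X.append(np.stack([maps[b].ravel() for b in BEHAVIORS], axis=1))
        y.append(fix.ravel())
    X, y = np.concatenate(X), np.concatenate(y)
    # Closed-form ridge solution: w = (X^T X + reg*I)^{-1} X^T y, clipped to be non-negative.
    w = np.linalg.solve(X.T @ X + reg * np.eye(len(BEHAVIORS)), X.T @ y)
    return np.clip(w, 0.0, None)

def fuse_saliency(maps, weights):
    # Weighted sum of the behavior heatmaps, rescaled to [0, 1].
    sal = sum(w * maps[b] for w, b in zip(weights, BEHAVIORS))
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)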
Keywords :
image processing; learning (artificial intelligence); double tap zoom; human attentions; human behaviors; human eye fixation; human simple zoom behavior; human touch behaviors; image browsing; image saliency maps; learning image saliency; machine learning algorithm; pinch zoom; touch devices; touch saliency learning approach; touch screen; Accuracy; Correlation; Image coding; Mobile handsets; Testing; Training; Visualization; Touch saliency; touch behaviors; visual saliency;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)
Conference_Location :
San Jose, CA
Type :
conf
DOI :
10.1109/ICMEW.2013.6618249
Filename :
6618249