DocumentCode
3605443
Title
Describing Multimedia Content Using Attention-Based Encoder-Decoder Networks
Author
Cho, Kyunghyun ; Courville, Aaron ; Bengio, Yoshua
Author_Institution
Dept. of Comput. Sci. & Operational Res., Univ. de Montréal, Montréal, QC, Canada
Volume
17
Issue
11
fYear
2015
Firstpage
1875
Lastpage
1886
Abstract
Whereas deep neural networks were first used mostly for classification tasks, they are rapidly expanding into the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. In this paper, we focus on the case where the input also has a rich structure and the input and output structures are related to each other. We describe systems that learn to attend to different places in the input, for each element of the output, across a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition. All these systems are built from a shared set of building blocks: gated recurrent neural networks and convolutional neural networks, along with trained attention mechanisms. We report experimental results with these systems, showing strong performance and the advantage of the attention mechanism.
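Code_Sketch
The following is a minimal NumPy sketch of the content-based soft attention the abstract refers to: a small scoring network compares the previous decoder state with each encoder annotation, the scores are normalized with a softmax, and the context vector is the resulting weighted sum. The function name, parameter shapes, and random weights here are illustrative assumptions, not code from the paper.

import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soft_attention(annotations, state, W_a, U_a, v_a):
    """One step of content-based soft attention (illustrative sketch).

    annotations: (T, d_h) encoder annotation vectors h_1..h_T
    state:       (d_s,)   previous decoder state s_{i-1}
    W_a, U_a, v_a: parameters of the scoring network (learned in practice)
    Returns the context vector c_i and the attention weights alpha_i.
    """
    # Relevance score e_{ij} = v_a^T tanh(W_a s_{i-1} + U_a h_j)
    scores = np.tanh(state @ W_a + annotations @ U_a) @ v_a   # shape (T,)
    alpha = softmax(scores)                                   # attention weights, sum to 1
    context = alpha @ annotations                             # weighted sum of annotations
    return context, alpha

# Toy usage with random parameters (shapes only; real weights would be trained).
rng = np.random.default_rng(0)
T, d_h, d_s, d_a = 5, 8, 6, 7
h = rng.standard_normal((T, d_h))
s = rng.standard_normal(d_s)
c, a = soft_attention(h, s,
                      rng.standard_normal((d_s, d_a)),
                      rng.standard_normal((d_h, d_a)),
                      rng.standard_normal(d_a))
print(a.round(3), a.sum())  # the weights form a distribution over input positions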
Keywords
learning (artificial intelligence); multimedia systems; recurrent neural nets; attention-based encoder-decoder networks; convolutional neural networks; deep neural networks; gated recurrent neural networks; image caption generation; machine translation; multimedia content; speech recognition; video clip description; Computational modeling; Context; Context modeling; Decoding; Mathematical model; Recurrent neural networks; Attention mechanism; deep learning; recurrent neural networks
fLanguage
English
Journal_Title
IEEE Transactions on Multimedia
Publisher
IEEE
ISSN
1520-9210
Type
jour
DOI
10.1109/TMM.2015.2477044
Filename
7243334