  • DocumentCode
    3748944
  • Title
    Sequence to Sequence -- Video to Text
  • Author
    Subhashini Venugopalan; Marcus Rohrbach; Jeffrey Donahue; Raymond Mooney; Trevor Darrell; Kate Saenko
  • Author_Institution
    Univ. of Texas at Austin, Austin, TX, USA
  • fYear
    2015
  • Firstpage
    4534
  • Lastpage
    4542
  • Abstract
    Real-world videos often have complex dynamics; methods for generating open-domain video descriptions should therefore be sensitive to temporal structure and allow both the input (sequence of frames) and the output (sequence of words) to be of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames with a sequence of words in order to generate a description of the event in the video clip. Our model is naturally able to learn both the temporal structure of the frame sequence and the sequence model of the generated sentences, i.e., a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).
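
    As a concrete illustration of the encoder-decoder pipeline the abstract describes, the sketch below shows a simplified sequence-to-sequence captioner in PyTorch: one LSTM encodes a variable-length sequence of CNN frame features, and its final state conditions a second LSTM that generates the word sequence. This is a minimal sketch under stated assumptions, not the authors' released S2VT implementation; all names and dimensions (Seq2SeqCaptioner, frame_dim, vocab_size, etc.) are illustrative.

        import torch
        import torch.nn as nn

        class Seq2SeqCaptioner(nn.Module):
            """Encode frame features with one LSTM; decode words with another."""
            def __init__(self, frame_dim=4096, embed_dim=500,
                         hidden_dim=1000, vocab_size=10000):
                super().__init__()
                self.frame_proj = nn.Linear(frame_dim, embed_dim)  # project CNN features
                self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
                self.embed = nn.Embedding(vocab_size, embed_dim)
                self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
                self.out = nn.Linear(hidden_dim, vocab_size)

            def forward(self, frames, captions):
                # frames: (batch, n_frames, frame_dim) CNN features per frame
                # captions: (batch, n_words) token ids (teacher forcing at train time)
                _, state = self.encoder(self.frame_proj(frames))   # summarize the clip
                dec_out, _ = self.decoder(self.embed(captions), state)
                return self.out(dec_out)  # (batch, n_words, vocab_size) logits

        # Toy usage: 16 frames of 4096-d features, 12-word captions.
        model = Seq2SeqCaptioner()
        logits = model(torch.randn(2, 16, 4096), torch.randint(0, 10000, (2, 12)))
        print(logits.shape)  # torch.Size([2, 12, 10000])

    Note the paper itself uses a stacked LSTM shared across the encoding and decoding phases rather than two fully separate LSTMs; the split encoder/decoder above is a common simplification of the same idea.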
  • Keywords
    "Decoding","Encoding","Feature extraction","Visualization","Recurrent neural networks","Optical imaging","Mathematical model"
  • Publisher
    IEEE
  • Conference_Title
    2015 IEEE International Conference on Computer Vision (ICCV)
  • Electronic_ISSN
    2380-7504
  • Type
    conf
  • DOI
    10.1109/ICCV.2015.515
  • Filename
    7410872