Unsupervised Learning of Video Representations using LSTMs

Pages: 843 - 852
Published: Jul 6, 2015
Abstract
We use Long Short-Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed-length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence or predicting the future sequence. We experiment with two kinds of input sequences: patches of image pixels and high-level...
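The architecture the abstract describes can be sketched roughly as follows. This is not the authors' code, just a minimal illustration of the idea: an encoder LSTM consumes the input sequence and its final hidden state serves as the fixed-length representation, from which a decoder LSTM unrolls to emit a reconstruction (or, with a second decoder, a future prediction). All sizes and weight initializations here are arbitrary placeholders.

```python
# Minimal encoder-decoder LSTM sketch in pure Python (illustrative only).
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """One LSTM cell over plain Python lists (input size n_in, hidden size n_h)."""
    def __init__(self, n_in, n_h):
        self.n_in, self.n_h = n_in, n_h
        n = n_in + n_h
        # Weights for input (i), forget (f), output (o) gates and candidate (c).
        self.W = {g: [[random.uniform(-0.1, 0.1) for _ in range(n)]
                      for _ in range(n_h)] for g in "ifoc"}
        self.b = {g: [0.0] * n_h for g in "ifoc"}

    def step(self, x, h, c):
        z = x + h                      # concatenate input and previous hidden
        def lin(g, row):
            return sum(w * v for w, v in zip(self.W[g][row], z)) + self.b[g][row]
        h_new, c_new = [], []
        for r in range(self.n_h):
            i = sigmoid(lin("i", r))   # input gate
            f = sigmoid(lin("f", r))   # forget gate
            o = sigmoid(lin("o", r))   # output gate
            g = math.tanh(lin("c", r)) # candidate cell value
            c_r = f * c[r] + i * g
            c_new.append(c_r)
            h_new.append(o * math.tanh(c_r))
        return h_new, c_new

def encode(cell, seq):
    """Run the encoder over seq; its final hidden state is the representation."""
    h = [0.0] * cell.n_h
    c = [0.0] * cell.n_h
    for x in seq:
        h, c = cell.step(x, h, c)
    return h, c

def decode(cell, h, c, steps, n_out):
    """Unroll the decoder from the encoder state, feeding back its own output."""
    outs, x = [], [0.0] * n_out
    for _ in range(steps):
        h, c = cell.step(x, h, c)
        x = h[:n_out]                  # toy readout: first n_out hidden units
        outs.append(x)
    return outs

n_in, n_h, T = 4, 6, 5                 # frame size, hidden size, sequence length
enc = LSTMCell(n_in, n_h)
dec = LSTMCell(n_in, n_h)
video = [[random.random() for _ in range(n_in)] for _ in range(T)]
rep, cell_state = encode(enc, video)   # fixed-length representation of the clip
recon = decode(dec, rep, cell_state, T, n_in)
print(len(rep), len(recon), len(recon[0]))  # 6 5 4
```

In the paper's composite model, two decoders share the one encoder state: one reconstructs the input frames and the other predicts future frames, which the sketch above would model as a second `decode` call with its own `LSTMCell`.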