Every Moment Counts: Dense Detailed Labeling of Actions in Complex Videos

Serena Yeung, Olga Russakovsky, Ning Jin, Mykhaylo Andriluka, Greg Mori, Li Fei-Fei

Research output: Contribution to journal › Article

67 Scopus citations

Abstract

Every moment counts in action recognition. A comprehensive understanding of human activity in video requires labeling every frame according to the actions occurring, placing multiple labels densely over a video sequence. To study this problem we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new dataset of dense labels over unconstrained internet videos. Modeling multiple, dense labels benefits from temporal relations within and across classes. We define a novel variant of long short-term memory deep networks for modeling these temporal relations via multiple input and output connections. We show that this model improves action labeling accuracy and further enables deeper understanding tasks ranging from structured retrieval to action prediction.
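The dense-labeling setting the abstract describes treats every frame as a multi-label decision: each action class is scored independently, so several actions can co-occur in one frame. The sketch below illustrates only this framing with a per-class sigmoid and threshold; it is not the paper's MultiLSTM model, and the function name and toy logits are illustrative assumptions.

```python
import math

def dense_multilabel_decode(frame_logits, threshold=0.5):
    """Turn per-frame, per-class logits into dense multi-label annotations.

    frame_logits[t][c] is a raw score for action class c at frame t.
    Each class is decided independently with a sigmoid, so multiple
    actions can be active in the same frame (the dense setting above).
    """
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    labels = []
    for scores in frame_logits:
        probs = [sigmoid(s) for s in scores]
        labels.append([p >= threshold for p in probs])
    return labels

# Toy example: 3 frames, 2 hypothetical action classes ("run", "throw").
logits = [[2.0, -1.0],   # frame 0: run only
          [1.5,  1.2],   # frame 1: run and throw overlap
          [-2.0, 0.8]]   # frame 2: throw only
print(dense_multilabel_decode(logits))
# → [[True, False], [True, True], [False, True]]
```

Frame 1 shows why this differs from single-label classification: both classes exceed the threshold, giving the overlapping labels that dense annotation requires.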

Original language: English (US)
Pages (from-to): 375-389
Number of pages: 15
Journal: International Journal of Computer Vision
Volume: 126
Issue number: 2-4
DOIs
State: Published - Apr 1 2018
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

