Abstract
Temporal action localization in untrimmed videos is an important but difficult task, and existing methods struggle to model the temporal structures of videos. In this study, we develop a novel method, referred to as the Gemini Network, for effectively modeling temporal structures and achieving high-performance temporal action localization. The significant improvements afforded by the proposed method are due to three major factors. First, temporal dependencies are explicitly divided into long-term and short-term dependencies, which are captured separately by two dedicated subnets. Second, a long-range temporal dependency capture module, combined with a self-adaptive pooling module, is proposed to capture long-term temporal dependencies. Third, the proposed method uses auxiliary supervision, with the auxiliary classifier losses providing additional constraints that improve the modeling capability of the network. As a demonstration of its effectiveness, the Gemini Network achieves state-of-the-art temporal action localization performance on two challenging datasets, THUMOS14 and ActivityNet.
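To illustrate the general idea of two dedicated subnets with auxiliary supervision described in the abstract, the following is a minimal PyTorch-style sketch. All module names, layer choices (dilated convolution and adaptive pooling for the long-term branch), feature dimensions, and the auxiliary loss weight are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchTemporalModel(nn.Module):
    """Hypothetical sketch: one subnet with a narrow temporal receptive
    field (short-term) and one with a wide, dilated receptive field plus
    adaptive pooling (long-term), each with an auxiliary classifier."""
    def __init__(self, feat_dim=2048, hidden=512, num_classes=20):
        super().__init__()
        # Short-term branch: small temporal kernel.
        self.short = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Long-term branch: dilated conv widens the receptive field,
        # then adaptive pooling reduces to a fixed temporal length.
        self.long = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(output_size=8),
        )
        self.aux_short = nn.Linear(hidden, num_classes)   # auxiliary head
        self.aux_long = nn.Linear(hidden, num_classes)    # auxiliary head
        self.main = nn.Linear(2 * hidden, num_classes)    # fused main head

    def forward(self, x):
        # x: (batch, feat_dim, T) snippet-level features
        s = self.short(x).mean(dim=-1)       # (batch, hidden)
        l = self.long(x).mean(dim=-1)        # (batch, hidden)
        fused = torch.cat([s, l], dim=-1)
        return self.main(fused), self.aux_short(s), self.aux_long(l)

def total_loss(outputs, target, aux_weight=0.3):
    """Main classification loss plus weighted auxiliary classifier losses,
    mirroring the auxiliary-supervision idea (weight chosen arbitrarily)."""
    main_logits, aux_s, aux_l = outputs
    loss = F.cross_entropy(main_logits, target)
    loss = loss + aux_weight * (F.cross_entropy(aux_s, target) +
                                F.cross_entropy(aux_l, target))
    return loss
```

In this sketch the auxiliary heads only contribute extra gradient signal during training; at inference only the fused main classifier would be used.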
| Original language | English (US) |
|---|---|
| Pages (from-to) | 4363-4375 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Multimedia |
| Volume | 23 |
| State | Published - 2021 |
All Science Journal Classification (ASJC) codes
- Signal Processing
- Media Technology
- Computer Science Applications
- Electrical and Electronic Engineering
Keywords
- Action localization
- convolutional neural networks
- video content analysis