Published June 2015 | Accepted Version
Book Section - Chapter

Finding action tubes

Abstract

We address the problem of action detection in videos. Driven by the latest progress in object detection from 2D images, we build action models using rich feature hierarchies derived from shape and kinematic cues. We incorporate appearance and motion in two ways. First, starting from image region proposals we select those that are motion salient and thus are more likely to contain the action. This leads to a significant reduction in the number of regions being processed and allows for faster computations. Second, we extract spatio-temporal feature representations to build strong classifiers using Convolutional Neural Networks. We link our predictions to produce detections consistent in time, which we call action tubes. We show that our approach outperforms other techniques in the task of action detection.
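The linking step described above can be illustrated with a small dynamic-programming sketch: per-frame detections are chained into a single tube by maximizing the sum of classifier scores plus an overlap bonus between boxes in consecutive frames. This is a minimal illustration under assumed box and score conventions, not the authors' implementation; the function names and the weighting parameter `lam` are hypothetical.

```python
# Hypothetical sketch: link per-frame detections into an "action tube" by
# maximizing cumulative score + lam * IoU between consecutive boxes.
# Boxes are (x1, y1, x2, y2); frames is a list of [(box, score), ...].

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def link_action_tube(frames, lam=1.0):
    """Return the index of the chosen detection in each frame,
    found by dynamic programming over linking scores."""
    # best[t][i]: best cumulative score of a tube ending at detection i, frame t
    best = [[score for _, score in frames[0]]]
    back = []  # backpointers for reconstructing the tube
    for t in range(1, len(frames)):
        cur, ptr = [], []
        for box, score in frames[t]:
            cands = [best[t - 1][j] + lam * iou(frames[t - 1][j][0], box)
                     for j in range(len(frames[t - 1]))]
            j_best = max(range(len(cands)), key=cands.__getitem__)
            cur.append(score + cands[j_best])
            ptr.append(j_best)
        best.append(cur)
        back.append(ptr)
    # Backtrack from the highest-scoring final detection.
    i = max(range(len(best[-1])), key=best[-1].__getitem__)
    path = [i]
    for ptr in reversed(back):
        i = ptr[i]
        path.append(i)
    return list(reversed(path))
```

With two detections per frame, the tube follows the high-scoring, spatially consistent boxes rather than hopping to a distant region, which is what makes the resulting detections "consistent in time".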

Additional Information

This work was supported by the Intel Visual Computing Center and the ONR SMARTS MURI N000140911051. The GPUs used in this research were generously donated by the NVIDIA Corporation.

Files

Accepted Version - Gkioxari_Finding_Action_Tubes_2015_CVPR_paper.pdf (2.9 MB)

Additional details

Created:
September 15, 2023
Modified:
October 23, 2023