Published June 2012 | public
Book Section - Chapter

Social behavior recognition in continuous video

Abstract

We present a novel method for analyzing social behavior. Continuous videos are segmented into action 'bouts' by building a temporal context model that combines features from spatio-temporal energy and agent trajectories. The method is tested on an unprecedented dataset of videos of interacting pairs of mice, which was collected as part of a state-of-the-art neurophysiological study of behavior. The dataset comprises over 88 hours (8 million frames) of annotated videos. We find that our novel trajectory features, used in a discriminative framework, are more informative than widely used spatio-temporal features; furthermore, temporal context plays an important role for action recognition in continuous videos. Our approach may be seen as a baseline method on this dataset, reaching a mean recognition rate of 61.2% compared to the expert's agreement rate of about 70%.
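The pipeline sketched in the abstract — per-frame features from agent trajectories and spatio-temporal energy, pooled over a temporal context window, classified discriminatively, and collapsed into action bouts — can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic features, the window size of 5 frames, and the use of scikit-learn's GradientBoostingClassifier are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code): per-frame features, a temporal-context
# window, a discriminative classifier, and bout segmentation.
# Assumptions: synthetic features, window of +/-5 frames, scikit-learn classifier.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def add_temporal_context(features, window=5):
    """Stack each frame's features with those of +/- `window` neighbouring frames."""
    n, d = features.shape
    padded = np.pad(features, ((window, window), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + n] for i in range(2 * window + 1)])

def frames_to_bouts(labels):
    """Collapse a per-frame label sequence into (start, end, action) bouts."""
    bouts, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            bouts.append((start, i, labels[start]))
            start = i
    return bouts

# Toy example: 1000 frames, 10-D per-frame features, 3 action classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))          # stand-in for trajectory/energy features
y = rng.integers(0, 3, size=1000)        # stand-in for per-frame annotations

Xc = add_temporal_context(X, window=5)   # temporal context: 11 frames x 10 dims
clf = GradientBoostingClassifier().fit(Xc, y)
pred = clf.predict(Xc)
print(frames_to_bouts(pred)[:5])         # first few predicted action bouts
```

In this toy setup the features and labels are random, so the predicted bouts are meaningless; the point is only the shape of the pipeline, in which temporal context is supplied by stacking neighbouring frames' features before classification.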

Additional Information

© 2012 IEEE. Date of Current Version: 26 July 2012. The authors would like to thank R. Robertson for his careful work annotating the videos, as well as Dr. A. Steele for coordinating some of the annotations. We would also like to thank Dr. M. Maire for his valuable feedback on the paper. X.P.B.A. holds a postdoctoral fellowship from the Spanish Ministry of Education, Programa Nacional de Movilidad de Recursos Humanos del Plan Nacional de I-D+i 2008-2011. D.L. was supported by the Jane Coffin Child Memorial Foundation. P.P. and D.J.A. were supported by the Gordon and Betty Moore Foundation. D.J. was supported by the Howard Hughes Medical Foundation. P.P. was also supported by ONR MURI Grant #N00014-10-1-0933.

Additional details

Created: August 19, 2023
Modified: October 18, 2023