[1]
N. Thanthry, I. Emmadi, A. Srikumar, K. Namuduri, and R. Pendse, SVSS: Intelligent video surveillance system for aircraft, IEEE Aerospace and Electronic Systems Magazine, vol. 24, no. 10, pp. 23–29, Oct. 2009.
DOI: 10.1109/maes.2009.5317783
[2]
R. Collins, et al., A system for video surveillance and monitoring: VSAM final report, Carnegie Mellon University Technical Report, 2000.
[3]
F. Kenech, Ubiquitous computing challenges in recognizing and predicting human activity, in Proc. 2004 IEEE International Conference on Fuzzy Systems, pp. 3–7.
[4]
C. Bregler, J. Malik, and K. Pullen, Twist based acquisition and tracking of animal and human kinematics, IJCV, vol. 56, no. 3, pp. 179–194, Feb. 2004.
DOI: 10.1023/b:visi.0000011203.00237.9b
[5]
Y. Yacoob and M. J. Black, Parameterized modeling and recognition of activities, CVIU, vol. 73, no. 2, pp. 232–247, Feb. 1999.
[6]
M. A. Isard and A. Blake, A mixed-state Condensation tracker with automatic model switching, in ICCV, pp. 107–112, 1998.
[7]
C. Rao and M. Shah, View-invariance in action recognition, in CVPR, vol. II, pp. 316–322, 2001.
[8]
Y. Song, L. Goncalves, and P. Perona, Unsupervised learning of human motion, PAMI, vol. 25, no. 7, pp. 814–827, 2003.
[9]
G. Mori and J. Malik, Estimating human body configurations using shape context matching, in ECCV, vol. III, pp. 666 ff., 2002.
DOI: 10.1007/3-540-47977-5_44
[10]
J. Sullivan and S. Carlsson, Recognizing and tracking human action, in ECCV, vol. I, p. 629, 2002.
[11]
A. Efros, A. Berg, G. Mori, and J. Malik, Recognizing action at a distance, in ICCV, pp. 726–733, Nice, France, 2003.
[12]
J.-H. Ahn, et al., Human tracking and silhouette extraction for human robot interaction systems, Pattern Analysis & Applications, vol. 12, pp. 167–177, 2009.
DOI: 10.1007/s10044-008-0112-3
[13]
T. Zhao, R. Nevatia, and B. Wu, Segmentation and tracking of multiple humans in crowded environments, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 7, pp. 1198–1211, 2008.
DOI: 10.1109/tpami.2007.70770
[14]
Y. Wang and G. Mori, Human action recognition by semi-latent topic models, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, pp. 1762–1774, 2009.
DOI: 10.1109/tpami.2009.43
[15]
F. Lv and R. Nevatia, Single view human action recognition using key pose matching and Viterbi path searching, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2007, pp. 1–8.
DOI: 10.1109/cvpr.2007.383131
[16]
J. Yin, et al., Spatio-temporal event detection using dynamic conditional random fields, in Proc. 2009 International Joint Conference on Artificial Intelligence, pp. 1321–1326.
[17]
P. Natarajan and R. Nevatia, View and scale invariant action recognition using multi-view shape-flow models, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2008, pp. 1–8.
DOI: 10.1109/cvpr.2008.4587716
[18]
N. Robertson and I. Reid, Behaviour understanding in video: a combined method, in Proc. IEEE Int. Conf. Comput. Vis., 2005, vol. 1, pp. 808–815.
[19]
M. Yamamoto, H. Mitomi, F. Fujiwara, and T. Sato, Bayesian classification of task-oriented actions based on stochastic context-free grammar, in Proc. IEEE Conf. Automat. Face Gesture Recognit., 2006, pp. 317–323.
DOI: 10.1109/fgr.2006.28
[20]
A. Bobick and J. Davis, The recognition of human movement using temporal templates, IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 3, pp. 257–267, Mar. 2001.
DOI: 10.1109/34.910878
[21]
D. Weinland, R. Ronfard, and E. Boyer, Automatic discovery of action taxonomies from multiple views, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2006, vol. 2, pp. 1639–1645.
DOI: 10.1109/cvpr.2006.65
[22]
T. Mori, Y. Segawa, M. Shimosaka, and T. Sato, Hierarchical recognition of daily human actions based on continuous hidden Markov models, in Proc. IEEE Conf. Automat. Face Gesture Recognit., 2004, pp. 779–784.
DOI: 10.1109/afgr.2004.1301629