Real Time Human Action Recognition using Kinematic State Model

Authors

  • Geetanjali Vinayak Kale, Pune Inst
  • Varsha H Patil Matoshri College of Engineering & Research Center (MCERC), Nashik, Maharashtra, India

DOI:

https://doi.org/10.14738/aivp.31.1000

Keywords:

human motion recognition, kinematic human model, Yogasana, Kinect.

Abstract

Human action recognition has tremendous applications across interdisciplinary domains, and its challenges have kept researchers busy worldwide, giving rise to a variety of representation and recognition methods. A posture can be represented either by shape features or by skeleton features. We represent an action as a sequence of postures and use skeleton features for posture representation. The proposed work recognizes Yogasana from real-time video. Yogasana is a type of exercise in which a specific sequence of postures must be performed; its regular practice shows tremendous benefits for physiological and psychological disorders. The system uses skeleton data of twenty human joints, provided by the Kinect sensor, to represent posture, and asanas are represented by a kinematic state model built on that data. The system was tested on 120 real-time video sequences captured from four different subjects performing three asanas, and achieved a 96% recognition rate.
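The abstract's core idea, recognizing an asana as an ordered sequence of postures derived from per-frame skeleton data, can be sketched as a simple state machine. This is a hedged illustration, not the authors' implementation: the joint names, posture rules, and asana definition below are invented placeholders; the paper's actual kinematic state model and features are not detailed here.

```python
# Illustrative sketch only: a kinematic-state-model style recognizer.
# Each frame is a dict of joint name -> (x, y, z), loosely mimicking the
# twenty-joint Kinect skeleton mentioned in the abstract. The posture
# predicates and the expected sequence are hypothetical examples.

def classify_posture(joints):
    """Map one frame of skeleton joints to a posture label.

    The rule here (both hands above the head) is a made-up placeholder
    for whatever skeleton features the real system computes.
    """
    if (joints["hand_right"][1] > joints["head"][1]
            and joints["hand_left"][1] > joints["head"][1]):
        return "hands_raised"
    return "standing"

def recognize_asana(frames, expected_sequence):
    """Advance through the expected posture states as frames arrive.

    The asana is considered recognized once every state in the
    sequence has been matched in order.
    """
    state = 0
    for joints in frames:
        if (state < len(expected_sequence)
                and classify_posture(joints) == expected_sequence[state]):
            state += 1
    return state == len(expected_sequence)
```

A recognizer like this processes each incoming frame once, so it is compatible with real-time video; the real system would use richer joint features and tolerances rather than a single hand-height rule.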

Author Biography

Geetanjali Vinayak Kale, Pune Inst

Computer Engineering

References

1. J. K. Aggarwal and M. S. Ryoo, "Human activity analysis: a review", ACM Computing Surveys, vol. 43, no. 3, article 16, 2011, pp. 16:1–16:43.

2. T. B. Moeslund et al., "A survey of advances in vision-based human motion capture and analysis", Computer Vision and Image Understanding, vol. 104, 2006, pp. 90–126.

3. R. Poppe, "Vision-based human motion analysis: An overview", Computer Vision and Image Understanding, vol. 108, 2007, pp. 4–18.

4. P. Turaga et al., "Machine Recognition of Human Activities: A Survey", IEEE Trans. on Circuits and Systems for Video Technology, vol. 18, no. 11, Nov. 2008, pp. 1473–1488.

5. J. Han et al., "Enhanced Computer Vision with Microsoft Kinect Sensor: A Review", IEEE Trans. on Cybernetics, vol. 43, no. 5, 2013, pp. 1318–1343.

6. G. V. Kale, V. H. Patil, " ", Proceedings of the Third Post Graduate Conference on Computer Engineering, cPGCON 2014, Vol X, pp. .

7. S. Nowozin and J. Shotton, "Action points: A representation for low-latency online human action recognition", Microsoft Research Cambridge, Tech. Rep. MSR-TR-2012-68, 2012.

8. M. Yang et al., "Human Action Recognition Based on Kinect", Journal of Computational Information Systems, vol. 10, no. 12, 2014, pp. 5347–5354.

9. X. Yang and Y. Tian, "Effective 3D action recognition using EigenJoints", Journal of Visual Communication and Image Representation, vol. 25, 2014, pp. 2–11.

10. J. Sung et al., "Human Activity Detection from RGBD Images", AAAI Workshop on Pattern, Activity and Intent Recognition, July 2011.

11. Z. Gao et al., "Human Action Recognition Via Multi-modality Information", J Electr Eng Technol, vol. 9, no. 2, 2013, pp. 739–748.

12. S. Deshpande, H. R. Nagendra, and R. Nagarathna, "A randomized control trial of the effect of yoga on Gunas (personality) and Health in normal healthy volunteers", International Journal of Yoga, Jan-Jun 2008, pp. 2–10.

13. K. Williams et al., "Effect of Iyengar yoga therapy for chronic low back pain", International Association for the Study of Pain, vol. 115, 2005, pp. 107–117.

14. A. Kauts and N. Sharma, "Effect of yoga on academic performance in relation to stress", International Journal of Yoga, vol. 2, no. 1, Jan-Jun 2009, pp. 39–43.

15. B. K. S. Iyengar, The Illustrated Light on Yoga, HarperCollins Publishers India, tenth impression, 2005.

Published

2015-02-28

How to Cite

Kale, G. V., & Patil, V. H. (2015). Real Time Human Action Recognition using Kinematic State Model. European Journal of Applied Sciences, 3(1), 17. https://doi.org/10.14738/aivp.31.1000