
International Journal of Academic Research in Business and Social Sciences

Open Access Journal

ISSN: 2222-6990

Translating Hand Gestures Using 3D Convolutional Neural Network

Farah Yasmin Abdul Rahman, Amirul Asyraf Kamaruzzaman, Shahrani Shahbudin, Roslina Mohamad, Nor Surayahani Suriani, Saiful Izwan Suliman

http://dx.doi.org/10.6007/IJARBSS/v12-i6/13989


Hand gestures are one of the mediums that many people use to communicate with each other. Gesture recognition applications have become increasingly popular in recent years, especially in computer vision. Typically, a gesture can be recognized easily from a single image frame (e.g., an alphabet from sign language); however, recognizing complex gestures with subtle differences between movements requires more work and larger datasets. In this work, we introduce a simple gesture recognition system that translates 5 different hand gestures, namely "doing other things", "swiping down", "swiping left", "zooming out with two fingers" and "drumming fingers". We used data obtained from the Jester dataset. The inputs were processed in RGB format during the pre-processing phase, and spatiotemporal filters were used for feature extraction, forming the main building block of the system. Next, we trained on these features using a 3D Convolutional Neural Network (3D-CNN). Finally, we tested the developed recognition system on real-time video with 5 different actors. Findings show that the developed model can translate hand gestures with an accuracy of 85.70% and a loss of 0.4%.
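
To illustrate the kind of model the abstract describes, the sketch below builds a small 3D-CNN classifier in Keras. The abstract does not specify the authors' exact architecture, clip length, or frame resolution, so the values here (30 frames of 64x64 RGB input and the layer sizes) are illustrative assumptions, not the published network.

```python
# Minimal 3D-CNN gesture classifier sketch (assumed, not the paper's
# exact architecture). Input: clips of 30 RGB frames at 64x64 pixels;
# output: the 5 gesture classes named in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # "doing other things", "swiping down", "swiping left",
                 # "zooming out with two fingers", "drumming fingers"
FRAMES, HEIGHT, WIDTH, CHANNELS = 30, 64, 64, 3  # assumed clip shape

model = models.Sequential([
    layers.Input(shape=(FRAMES, HEIGHT, WIDTH, CHANNELS)),
    # Spatiotemporal filters: 3D convolutions slide over time as well as
    # the two spatial dimensions, so motion between frames is captured.
    layers.Conv3D(32, kernel_size=(3, 3, 3), activation="relu"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),  # pool space, keep time
    layers.Conv3D(64, kernel_size=(3, 3, 3), activation="relu"),
    layers.MaxPooling3D(pool_size=(2, 2, 2)),  # pool space and time
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Categorical cross-entropy assumes one-hot encoded gesture labels.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The key design point is the extra temporal axis: unlike a 2D CNN, which sees each frame independently, the 3D kernels convolve across consecutive frames, which is what lets the network distinguish gestures such as "swiping down" from "swiping left" that may look similar in any single frame.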


In-Text Citation: (Abdul Rahman et al., 2022)
To Cite this Article: Abdul Rahman, F. Y., Kamaruzzaman, A. A., Shahbudin, S., Mohamad, R., Suriani, N. S., & Suliman, S. I. (2022). Translating Hand Gestures Using 3D Convolutional Neural Network. International Journal of Academic Research in Business and Social Sciences, 12(6), 533–542.