CURVELET TRANSFORM AND HMM CLASSIFIER BASED SIGN LANGUAGE RECOGNITION SYSTEM

  • Suresh Anand M
  • Mohan Kumar N
Keywords: Curvelet Transform, Sign Language, Entropy Features, HMM Classifier

Abstract

Deaf and dumb persons require a communication tool in the form of sign language, since oral communication is not possible for them; sign language recognition systems convert signs into voice or text. Many algorithms have recently been developed for this purpose. This paper presents an Indian Sign Language Recognition (ISLR) system that uses curvelet-transform-based entropy features for recognition, with the transform applied only to the segmented hand region. The features of each sign of the English alphabet are then modelled by a classifier network based on Hidden Markov Models (HMMs). The system achieves an average accuracy of 82.95% using 3rd-level features, which can help reduce the communication gap between deaf-dumb and normal people.
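The recognition pipeline described above (entropy features from transform subbands, then per-sign HMM scoring) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes Shannon entropy over normalized subband coefficient energies (one common choice; the paper's exact feature definition is not given here), discrete-observation HMMs scored with the forward algorithm, and it stubs out the curvelet decomposition entirely.

```python
import numpy as np

def subband_entropy(coeffs, eps=1e-12):
    """Shannon entropy of normalized coefficient energies in one subband
    (an assumed feature definition, for illustration only)."""
    energy = np.abs(coeffs).ravel() ** 2
    p = energy / (energy.sum() + eps)
    return float(-(p * np.log2(p + eps)).sum())

def forward_log_likelihood(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space."""
    alpha = log_pi + log_B[:, obs[0]]           # initial step
    for o in obs[1:]:
        # alpha_t(j) = log_B[j, o] + logsumexp_i(alpha_{t-1}(i) + log_A[i, j])
        alpha = log_B[:, o] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return float(np.logaddexp.reduce(alpha))

def classify(obs, models):
    """Pick the sign whose HMM assigns the highest likelihood to the
    observation sequence; `models` maps sign -> (log_pi, log_A, log_B)."""
    return max(models, key=lambda sign: forward_log_likelihood(obs, *models[sign]))
```

In a full system, each frame's segmented hand region would be decomposed (e.g. with a curvelet library), the per-subband entropies would be quantized into the discrete observation symbols, and one HMM per sign would be trained on example sequences.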

Published: 2017-06-20
Section: Articles