
A vision based dynamic gesture recognition of Indian Sign Language on Kinect based depth images

Publication Type : Conference Paper

Publisher : Emerging Trends in Communication, Control, Signal Processing and Computing Applications (C2SPCA), 2013 International Conference on

Source : Emerging Trends in Communication, Control, Signal Processing and Computing Applications (C2SPCA), 2013 International Conference on, 2013

Url : http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6749448

Keywords : axis of least inertia, Dynamic Gestures, Indian sign language, Microsoft Kinect, Principal component analysis

Campus : Amritapuri

School : Department of Computer Science and Engineering, School of Engineering

Department : Computer Science

Verified : Yes

Year : 2013

Abstract : Indian Sign Language (ISL) is a visual-spatial language that conveys linguistic information using hands, arms, facial expressions, and head/body postures. The proposed work aims at recognizing 3D dynamic signs corresponding to ISL words. With the advent of 3D sensors such as the Microsoft Kinect camera, 3D geometric processing of images has received much attention in recent research. We have captured 3D dynamic gestures of ISL words using a Kinect camera and propose a novel method for feature extraction from these dynamic gestures. While languages like American Sign Language (ASL) are widely studied in research and development, Indian Sign Language has been standardized only recently, and hence its recognition is less explored. The method extracts features from the signs and converts them into the intended textual form. The proposed method integrates both local and global information of the dynamic sign. A new trajectory-based feature extraction method using the concept of the Axis of Least Inertia (ALI) is proposed for global feature extraction. An eigen-distance-based method using seven 3D key points extracted by the Kinect (five corresponding to the fingertips, one to the centre of the palm, and one to the lower part of the palm) is proposed for local feature extraction. Integrating the 3D local features improves the performance of the system, as shown in the results. Apart from serving as an aid to people with disabilities, other applications of the system include use as a sign language tutor or interpreter, and in electronic systems that take gesture input from users.
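
The sketch below is not the authors' implementation; it only illustrates the Axis of Least Inertia idea mentioned in the abstract, assuming a gesture is available as an (N, 3) array of 3D palm-centre points from the Kinect. The ALI is taken as the line through the trajectory's centroid along the principal eigenvector of the point covariance (a PCA-style computation), and the example global descriptor (point-to-axis distances resampled to a fixed length) is a hypothetical choice for illustration.

```python
# Minimal sketch (not the paper's method) of an ALI-based trajectory descriptor.
# All function names, the 32-sample length, and the synthetic data are assumptions.

import numpy as np

def axis_of_least_inertia(trajectory):
    """trajectory: (N, 3) array of 3D palm-centre points over time."""
    centroid = trajectory.mean(axis=0)
    centred = trajectory - centroid
    cov = np.cov(centred, rowvar=False)      # 3x3 covariance of the points
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    direction = eigvecs[:, -1]               # largest-eigenvalue axis = ALI direction
    return centroid, direction

def trajectory_feature(trajectory, n_samples=32):
    """Distances of each trajectory point from the ALI, resampled to a fixed
    length so gestures of different duration yield comparable feature vectors."""
    centroid, direction = axis_of_least_inertia(trajectory)
    centred = trajectory - centroid
    # Perpendicular distance of every point from the ALI line.
    proj = np.outer(centred @ direction, direction)
    dists = np.linalg.norm(centred - proj, axis=1)
    idx = np.linspace(0, len(dists) - 1, n_samples)
    return np.interp(idx, np.arange(len(dists)), dists)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_gesture = np.cumsum(rng.normal(size=(120, 3)), axis=0)  # synthetic path
    print(trajectory_feature(fake_gesture).shape)                # (32,)
```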

Cite this Research Publication :
M. Geetha, C, M., P, U., and R, H., "A vision based dynamic gesture recognition of Indian Sign Language on Kinect based depth images", in Emerging Trends in Communication, Control, Signal Processing and Computing Applications (C2SPCA), 2013 International Conference on, 2013.
