
Sign Language Recognition: A Comparative Analysis of Deep Learning Models

Publication Type : Book Chapter

Publisher : Springer Nature Singapore

Source : Inventive Computation and Information Technologies: Proceedings of ICICIT 2021

URL : https://link.springer.com/chapter/10.1007/978-981-16-6723-7_1

Campus : Amritapuri

School : School of Computing

Year : 2022

Abstract : Sign language is the primary means of communication used by deaf and mute people. Learning this language can be perplexing for hearing people; therefore, it is critical to develop a system that can accurately detect sign language. Recent advances in deep learning and computer vision are leveraged to make an impact in sign language recognition with a fully automated deep learning architecture. This paper presents two models built using two deep learning algorithms, VGG-16 and a convolutional neural network (CNN), for the recognition and classification of hand gestures. The project aims at quantitatively analysing the models' performance by optimising the accuracy obtained on a limited dataset, and at designing a system that recognises the hand gestures of American Sign Language and detects the corresponding alphabets. Both models gave excellent results, with VGG-16 performing better: the VGG-16 model delivered an accuracy of 99.56%, followed by the CNN with an accuracy of 99.38%.
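The abstract describes a VGG-16-based classifier for American Sign Language alphabet gestures. As a minimal sketch only, the snippet below shows one common way such a model could be assembled with Keras transfer learning; the input size (224×224), the dense head (256 units, dropout 0.5), the class count of 26, and the optimiser are illustrative assumptions and are not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 26  # ASL alphabet letters (assumed; the paper's exact class count may differ)

# Load VGG-16 pretrained on ImageNet, without its fully connected head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional layers for transfer learning

# Attach a small classification head for the sign-language alphabet classes.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be image datasets of hand-gesture photos, e.g. built with
# tf.keras.utils.image_dataset_from_directory(...) and resized to 224x224.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The paper's second model, a CNN trained from scratch, would follow the same compile/fit pattern with its own convolutional stack in place of the frozen VGG-16 base.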

Cite this Research Publication : Premkumar, Aswathi, R. Hridya Krishna, Nikita Chanalya, C. Meghadev, Utkrist Arvind Varma, T. Anjali, and S. Siji Rani. "Sign language recognition: A comparative analysis of deep learning models." In Inventive Computation and Information Technologies: Proceedings of ICICIT 2021, pp. 1-13. Singapore: Springer Nature Singapore, 2022.
