
Sign Language Recognition: A Comparative Analysis of Deep Learning Models

Publication Type : Book Chapter

Publisher : SpringerNature

Source : Lecture Notes in Networks and Systems

Url : https://link.springer.com/chapter/10.1007/978-981-16-6723-7_1

Campus : Amritapuri

School : School of Computing

Center : Computational Linguistics and Indic Studies

Year : 2022

Abstract : Sign language is the primary means of communication used by deaf and mute people. Learning this language can be challenging for hearing people; therefore, it is important to develop a system that can accurately recognise sign language. Recent advances in deep learning and computer vision make it possible to address sign language recognition with a fully automated deep learning architecture. This paper presents two models built with two deep learning approaches, VGG-16 and a convolutional neural network (CNN), for the recognition and classification of hand gestures. The project aims to analyse the models' performance quantitatively by optimising the accuracy obtained on a limited dataset, and to design a system that recognises the hand gestures of American Sign Language and identifies the corresponding alphabet letters. Both models gave excellent results, with VGG-16 being the better of the two: the VGG-16 model delivered an accuracy of 99.56%, followed by the CNN with an accuracy of 99.38%.
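The abstract does not specify the exact architectures or hyperparameters used in the paper, but the VGG-16 approach described is commonly implemented as transfer learning on ImageNet weights with a small classification head. The following is a minimal sketch of such a classifier, assuming Keras/TensorFlow, 224x224 RGB inputs, and 26 output classes (one per ASL alphabet letter); it is an illustration under those assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a VGG-16-based classifier for
# ASL alphabet images, assuming 224x224 RGB inputs and 26 letter classes.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 26           # assumption: one class per ASL alphabet letter
IMG_SHAPE = (224, 224, 3)  # VGG-16's standard input size

# Load VGG-16 pretrained on ImageNet and freeze its convolutional base.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
base.trainable = False

# Add a small classification head on top of the frozen features.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical training call; train_ds / val_ds would come from an ASL
# alphabet image dataset, e.g. via tf.keras.utils.image_dataset_from_directory.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

A plain CNN baseline, as compared in the paper, would replace the frozen VGG-16 base with a few convolution and pooling layers trained from scratch; the head and training setup stay the same.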

Cite this Research Publication : Aswathi Premkumar, R. Hridya Krishna, Nikita Chanalya, C. Meghadev, Utkrist Arvind Varma, T. Anjali, and S. Siji Rani, "Sign Language Recognition: A Comparative Analysis of Deep Learning Models," in Inventive Computation and Information Technologies, Lecture Notes in Networks and Systems, vol. 336, Springer, Singapore, 2022.
