
Sign Language Video to Text Conversion via Optimised LSTM with Improved Motion Estimation

Publication Type : Journal Article

Publisher : Journal of Experimental & Theoretical Artificial Intelligence

Source : Journal of Experimental & Theoretical Artificial Intelligence 37 (1): 163–82

Url : https://www.tandfonline.com/doi/full/10.1080/0952813X.2024.2380991

Campus : Coimbatore

School : School of Physical Sciences

Department : Mathematics

Year : 2024

Abstract : Sign language (SL) is a method of communication for the deaf and the dumb. Knowing SL allows speakers and listeners to converse effectively with deaf and dumb people, yet untrained persons who have not acquired SL are unable to communicate with them. For such individuals, an SL-to-text system is beneficial, enabling more effective communication with everyday people. To interact with the deaf and dumb, SL employs gestures of the hands and eyes. This work proposes a new model for translating SL video into text. Frame conversion is the initial step, in which the given input video is converted into frames. The features considered are a hierarchy of skeleton and improved motion estimation (ME). For generating the text, this work uses an optimised LSTM model, whose weights are trained by a hybrid Combined Pelican and BES algorithm (CP-BES). Moreover, on the ISL-CSLTR dataset, the mean value achieved by the proposed method is 27.3%, 31.6%, 28.7%, 33.4%, 29.8%, 24.5%, 36.2%, and 34.8% greater than conventional schemes such as CNN, FSU-CNN, ASL-CNN, RNN, LSTM+POA, LSTM+ARCHOA, LSTM+HGS, LSTM+PRO, and LSTM+BES.
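The pipeline summarised in the abstract (frames, then skeleton and motion features, then an optimised LSTM producing text) can be illustrated with a minimal sketch. This is not the authors' implementation: the SignLSTM class, the feature dimensions, and the simple frame-difference stand-in for improved motion estimation are assumptions made for illustration, and the CP-BES weight optimisation is omitted.

```python
# Hypothetical sketch of the described pipeline, assuming PyTorch and
# per-frame skeleton keypoints as input; not the paper's actual code.
import torch
import torch.nn as nn

class SignLSTM(nn.Module):
    """Sequence model mapping per-frame feature vectors to sign/gloss labels."""
    def __init__(self, feat_dim, hidden_dim, vocab_size):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):             # x: (batch, frames, feat_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last time step

def motion_features(keypoints):
    """Frame-to-frame joint displacement, a simple stand-in for improved ME."""
    return keypoints[:, 1:] - keypoints[:, :-1]

# Dummy data: 4 clips, 16 frames, 21 skeleton joints with (x, y) coordinates.
joints = torch.randn(4, 16, 21, 2)
feats = torch.cat(
    [joints[:, 1:].flatten(2), motion_features(joints).flatten(2)], dim=-1
)

model = SignLSTM(feat_dim=feats.shape[-1], hidden_dim=128, vocab_size=100)
logits = model(feats)                 # (4, 100) scores over the gloss vocabulary
print(logits.shape)
```

In the paper, the LSTM weights are tuned with the hybrid CP-BES algorithm rather than the standard gradient-based training assumed in this sketch.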

Cite this Research Publication : Subburaj, S., S. Murugavalli, and B. Muthusenthil. "Sign Language Video to Text Conversion via Optimised LSTM with Improved Motion Estimation." Journal of Experimental & Theoretical Artificial Intelligence 37 (1): 163–82. doi:10.1080/0952813X.2024.2380991
