Publication Type : Conference Paper
Publisher : IEEE
Source : 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), 2020
Url : https://ieeexplore.ieee.org/abstract/document/9214103
Campus : Amritapuri
School : School of Engineering
Department : Electronics and Communication
Year : 2020
Abstract : Deep learning and emotion recognition have advanced rapidly in recent years. This work presents a model framework that recognizes the emotion conveyed by a face and by a voice, with the primary goal of improving human-computer interaction. The model system is given frontal face images and short voice clips. From the FER2013 image database, 25,838 samples were used for training, and 90 samples from the Amrita Emote database (ADB) were used for testing. The speech database combines four different datasets with a total of 20,000 examples; three-quarters of this data is used for training and one-quarter for testing. Recurrent neural networks (RNNs) and convolutional neural networks (CNNs), applied to speech and image processing respectively, classify six emotions: happiness, sadness, anger, disgust, surprise, and fear.
Cite this Research Publication : R. Chinmayi, N. Sreeja, A. S. Nair, M. K. Jayakumar, R. Gowri and A. Jaiswal, "Emotion Classification Using Deep Learning," 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 2020, pp. 1063-1068, doi: 10.1109/ICSSIT48917.2020.9214103.
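
To make the image pipeline concrete, the following is a minimal illustrative sketch (not the paper's actual architecture, whose layer configuration is not given here): a single convolution, ReLU, max-pooling, and softmax forward pass over a 48x48 grayscale input, the FER2013 image format, scoring six emotion classes. All layer sizes and the random weights are placeholder assumptions.

```python
import numpy as np

# Illustrative sketch only, not the published model: one conv -> ReLU
# -> max-pool -> dense -> softmax pass over a 48x48 grayscale face
# (the FER2013 input format), producing six emotion-class probabilities.

EMOTIONS = ["happiness", "sadness", "anger", "disgust", "surprise", "fear"]

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that don't divide evenly."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

def forward(img, kernel, W, b):
    feat = np.maximum(conv2d(img, kernel), 0.0)  # ReLU activation
    pooled = max_pool(feat).ravel()              # downsample and flatten
    return softmax(pooled @ W + b)               # class probabilities

rng = np.random.default_rng(0)
img = rng.random((48, 48))                 # stand-in for a FER2013 face
kernel = rng.standard_normal((3, 3))       # one placeholder 3x3 filter
pooled_dim = ((48 - 3 + 1) // 2) ** 2      # 23 * 23 = 529 features
W = rng.standard_normal((pooled_dim, len(EMOTIONS))) * 0.01
b = np.zeros(len(EMOTIONS))

probs = forward(img, kernel, W, b)
print(EMOTIONS[int(np.argmax(probs))])
```

A trained classifier of this kind would stack several such conv/pool blocks and learn the kernel and dense weights by backpropagation over the training split; here the weights are random, so the predicted label is arbitrary.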