Publication Type : Journal Article
Publisher : IEEE
Source : IEEE Access, Volume 7, pp. 81883-81902 (2019)
Url : https://ieeexplore.ieee.org/document/8736730/
Campus : Bengaluru
School : Department of Computer Science and Engineering, School of Engineering
Department : Computer Science, Electronics and Communication
Year : 2019
Abstract : Expressive speech can be synthesized through acoustic feature modeling by mapping the spectral and fundamental frequency (F0) parameters between neutral speech and target emotions based on context. Speaker- and text-independent emotion conversion are challenging modeling problems in this paradigm. In this paper, spectral mapping using an i-vector-based framework of fixed dimensions is proposed for speaker-independent emotion conversion, treating the entire problem in the utterance domain rather than with the frame-level processing used by existing approaches. The high dimensionality of i-vectors and the limited number of utterances available for i-vector training necessitate the use of Probabilistic Linear Discriminant Analysis (PLDA) to derive the emotion-dependent latent vector. The i-vector setup requires neither parallel data nor alignment procedures at any stage of training. F0 mapping is trained on a multilayer feed-forward neural network using a limited amount of aligned seed parallel data. The framework is tested on three datasets in different languages: German (EmoDB), Telugu (IITKGP), and English (SAVEE). The proposed approach outperformed the baseline under both the clean and noisy data conditions considered for analysis. Under clean conditions, the proposed model achieved a Mel Cepstral Distortion (MCD) as low as 3.8 (fear), an F0 RMSE of 26.31 (happiness), and a Perceptual Evaluation of Speech Quality (PESQ) score of 3.64 (anger) across datasets. Subjective testing yielded a maximum Comparative Mean Opinion Score (CMOS) of 4.10 (anger), 4.44 (fear), and 3.43 (happiness).
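The F0 mapping step described in the abstract could be prototyped with a small multilayer feed-forward regressor. The following is a minimal sketch, not the authors' code: it uses scikit-learn's MLPRegressor with illustrative layer sizes, and the feature dimensions and random placeholder arrays standing in for real aligned seed data are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical aligned seed data: per-frame neutral F0 features
# (e.g., log-F0 with a few context frames) paired with the
# target-emotion log-F0 at the same aligned frame.
rng = np.random.default_rng(0)
X_neutral = rng.random((2000, 5))   # placeholder for real input features
y_emotion = rng.random(2000)        # placeholder for aligned F0 targets

# A small multilayer feed-forward regressor standing in for the
# paper's F0 mapping network; layer sizes here are illustrative.
f0_mapper = MLPRegressor(hidden_layer_sizes=(64, 64),
                         activation='tanh', max_iter=500)
f0_mapper.fit(X_neutral, y_emotion)
predicted_f0 = f0_mapper.predict(X_neutral[:10])
```

To make the reported objective metrics concrete, below is a short sketch of the two measures quoted above, MCD and F0 RMSE, implemented from their standard definitions. The function names, array shapes, and the convention of excluding the 0th (energy) cepstral coefficient are assumptions, and frame alignment (e.g., by dynamic time warping) is assumed to have been done beforehand.

```python
import numpy as np

def mel_cepstral_distortion(mcep_ref, mcep_conv):
    """Mean Mel Cepstral Distortion (dB) between two frame-aligned
    MCEP sequences of shape (frames, coeffs); the 0th (energy)
    coefficient is excluded, as is conventional."""
    diff = mcep_ref[:, 1:] - mcep_conv[:, 1:]
    # Per-frame MCD in dB: (10 / ln 10) * sqrt(2 * sum_d diff_d^2)
    mcd_per_frame = (10.0 / np.log(10)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return float(np.mean(mcd_per_frame))

def f0_rmse(f0_ref, f0_conv):
    """RMSE (Hz) between reference and converted F0 contours,
    computed over frames voiced in both (F0 > 0)."""
    voiced = (f0_ref > 0) & (f0_conv > 0)
    return float(np.sqrt(np.mean((f0_ref[voiced] - f0_conv[voiced]) ** 2)))
```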
Cite this Research Publication : S. Vekkot, D. Gupta, M. Zakariah, and Y. A. Alotaibi, "Hybrid Framework for Speaker-Independent Emotion Conversion Using i-Vector PLDA and Neural Network", IEEE Access, vol. 7, pp. 81883-81902, 2019.