Publication Type : Journal Article
Source : EURASIP Journal on Audio, Speech, and Music Processing, 49 (2023), Impact factor: 2.4, SJR: 0.458, Indexing: SCIE
Url : https://asmp-eurasipjournals.springeropen.com/articles/10.1186/s13636-023-00316-4
Campus : Coimbatore
School : School of Artificial Intelligence
Center : Center for Computational Engineering and Networking
Year : 2023
Abstract : Predominant source separation is the separation of one or more desired predominant signals, such as the voice or leading instruments, from polyphonic music. The proposed work applies time-frequency filtering for predominant source separation and a conditional adversarial network to improve the perceived quality of the isolated sounds. The pitch tracks corresponding to the prominent sound sources of the polyphonic music are estimated using a predominant pitch extraction algorithm, and a binary mask corresponding to each pitch track and its harmonics is generated. Time-frequency filtering is performed on the spectrogram of the input signal using this binary mask, which isolates the dominant sources based on pitch. The perceptual quality of the source-separated music signal is enhanced using a CycleGAN-based conditional adversarial network operating on spectrogram images. The reconstructed spectrogram is converted back to a music signal by applying the inverse short-time Fourier transform, and the intelligibility of the separated audio is further enhanced using an intelligibility enhancement module based on an audio style transfer scheme. The proposed work is systematically evaluated on the IRMAS and ADC 2004 datasets through both subjective and objective evaluations. Its performance is compared with the state-of-the-art Demucs and Wave-U-Net architectures and shows competitive performance both objectively and subjectively.
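Illustration : The sketch below is not the authors' code; it is a minimal Python illustration, under assumed parameters, of the pitch-guided time-frequency masking and inverse-STFT resynthesis step described in the abstract. It assumes a predominant pitch track f0 (one Hz value per STFT frame) has already been produced by some pitch extraction front end; the function names, frame size, harmonic count, and frequency tolerance are illustrative choices, not values from the paper, and the CycleGAN enhancement and style-transfer stages are not shown.

import numpy as np
from scipy.signal import stft, istft

def harmonic_binary_mask(f0, freqs, n_harmonics=10, tol_hz=40.0):
    """Binary mask keeping STFT bins near each harmonic of the pitch track.

    f0    : (n_frames,) predominant pitch per frame in Hz (0 = unvoiced)
    freqs : (n_bins,) STFT bin centre frequencies in Hz
    """
    mask = np.zeros((len(freqs), len(f0)), dtype=float)
    for n, pitch in enumerate(f0):
        if pitch <= 0:  # unvoiced frame: keep nothing
            continue
        for h in range(1, n_harmonics + 1):
            # mark bins within tol_hz of the h-th harmonic
            mask[np.abs(freqs - h * pitch) <= tol_hz, n] = 1.0
    return mask

def separate_predominant(x, fs, f0, nperseg=2048, noverlap=1536):
    """Isolate the predominant source by masking the mixture STFT."""
    freqs, _, Z = stft(x, fs, nperseg=nperseg, noverlap=noverlap)
    mask = harmonic_binary_mask(f0, freqs)
    n_frames = min(mask.shape[1], Z.shape[1])  # align frame counts
    Z_masked = Z[:, :n_frames] * mask[:, :n_frames]
    _, y = istft(Z_masked, fs, nperseg=nperseg, noverlap=noverlap)
    return y

In the paper's pipeline the masked/separated spectrogram would then be passed to the CycleGAN-based enhancement stage before the final inverse transform; here the mask is applied and inverted directly only to show the filtering principle.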
Cite this Research Publication : Reghunath, L. C., Rajan, R., "Predominant Audio Source Separation", EURASIP Journal on Audio, Speech, and Music Processing, 49 (2023). Impact factor: 2.4, SJR: 0.458, Indexing: SCIE