Two-stage single-channel audio source separation using deep neural networks

Emad M Grais, Gerard Roma, Andrew JR Simpson, Mark D Plumbley

Research output: Contribution to journal › Article

22 Citations (Scopus)

Abstract

Most single-channel audio source separation approaches produce separated sources accompanied by interference from the other sources and by other distortions. To tackle this problem, we propose to separate the sources in two stages. In the first stage, the sources are separated from the mixed signal. In the second stage, the interference between the separated sources and the distortions are reduced using deep neural networks (DNNs). We propose two methods that use DNNs to improve the quality of the separated sources in the second stage. In the first method, each separated source is improved individually using its own trained DNN, while in the second method all the separated sources are improved together using a single DNN. To further improve the quality of the separated sources, the DNNs in the second stage are trained discriminatively to decrease the interference and the distortions of the separated sources. Our experimental results show that using two stages of separation improves the quality of the separated signals, decreasing the interference between the separated sources and the distortions compared with separating the sources in a single stage.
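The two-stage pipeline in the abstract can be sketched as follows. This is a minimal NumPy illustration of the data flow only: the paper's trained DNNs are replaced by placeholder mask/linear operations, and all function and variable names (`stage1_separate`, `stage2_enhance_single`, `stage2_enhance_joint`) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of two-stage separation: stage 1 splits the mixture into
# initial source estimates; stage 2 enhances them, either per-source
# (method 1) or jointly with one model (method 2). Placeholder weights
# stand in for trained DNNs.
import numpy as np

rng = np.random.default_rng(0)

def stage1_separate(mixture_mag, n_sources=2):
    """Stage 1: split the mixture magnitude spectrogram into initial
    source estimates via soft masks (a stand-in for the stage-1 DNN)."""
    masks = rng.random((n_sources,) + mixture_mag.shape)
    masks /= masks.sum(axis=0, keepdims=True)   # masks sum to 1 per bin
    return [m * mixture_mag for m in masks]

def stage2_enhance_single(source_mag, weight):
    """Stage 2, method 1: each separated source is improved by its own
    enhancer (a linear map + ReLU standing in for a per-source DNN)."""
    return np.maximum(source_mag @ weight, 0.0)

def stage2_enhance_joint(sources_mag, weight):
    """Stage 2, method 2: all separated sources are improved together by
    a single model that sees the stacked source estimates."""
    stacked = np.concatenate(sources_mag, axis=1)   # (frames, n_src*bins)
    out = np.maximum(stacked @ weight, 0.0)
    return np.split(out, len(sources_mag), axis=1)

frames, bins, n_src = 100, 64, 2
mixture = np.abs(rng.standard_normal((frames, bins)))

estimates = stage1_separate(mixture, n_src)
w_single = np.eye(bins)                 # identity stand-ins for trained weights
w_joint = np.eye(n_src * bins)
enhanced_individual = [stage2_enhance_single(s, w_single) for s in estimates]
enhanced_joint = stage2_enhance_joint(estimates, w_joint)
```

Because the stage-1 masks sum to one in every time-frequency bin, the initial estimates add back up to the mixture; stage 2 then refines each estimate without that constraint.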
Original language: English
Pages (from-to): 1773-1783
Number of pages: 11
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 25
Issue number: 9
DOIs
Publication status: Published - 1 Sep 2017
Externally published: Yes

