Single-Channel Audio Source Separation Using Deep Neural Network Ensembles

Emad M Grais, Gerard Roma, Andrew JR Simpson, Mark D Plumbley

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

24 Citations (Scopus)

Abstract

Deep neural networks (DNNs) are often used to tackle the single-channel source separation (SCSS) problem by predicting time-frequency masks. The predicted masks are then used to separate the sources from the mixed signal. Different types of masks produce separated sources with different levels of distortion and interference: some masks yield separated sources with low distortion, while others yield low interference between the separated sources. In this paper, a combination of different DNNs' predictions (masks) is used for SCSS to achieve better quality of the separated sources than using each DNN individually. We train four different DNNs by minimizing four different cost functions to predict four different masks. The first and second DNNs are trained to approximate reference binary and soft masks, respectively. The third DNN is trained to predict a mask directly from the reference sources. The fourth DNN is trained like the third, but with an additional discriminative constraint that maximizes the differences between the estimated sources. Our experimental results show that combining the predictions of different DNNs yields separated sources of better quality than using each DNN individually.
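The abstract outlines the core idea: several DNNs each predict a time-frequency mask, and their predictions are combined before the mask is applied to the mixture spectrogram. The sketch below illustrates that pipeline with NumPy. It is illustrative only: the function names, the use of magnitude spectrograms, and the weighted-average combination rule are assumptions, since this record does not specify the paper's combination strategy, and the DNN mask predictors themselves are not shown.

```python
import numpy as np

def ideal_binary_mask(target_mag, interferer_mag):
    """Reference binary mask: 1 where the target dominates the mixture, 0 elsewhere."""
    return (target_mag >= interferer_mag).astype(float)

def ideal_ratio_mask(target_mag, interferer_mag, eps=1e-8):
    """Reference soft (ratio) mask: fraction of the mixture energy assigned to the target."""
    return target_mag / (target_mag + interferer_mag + eps)

def combine_masks(masks, weights=None):
    """Combine mask estimates from several models into a single mask.

    `masks` is a list of (freq, time) arrays, one per model. A weighted
    average is used here purely as an illustrative combination rule.
    """
    stacked = np.stack(masks, axis=0)                  # (n_models, freq, time)
    if weights is None:
        weights = np.full(len(masks), 1.0 / len(masks))
    return np.tensordot(weights, stacked, axes=1)      # (freq, time)

def apply_mask(mixture_stft, mask):
    """Apply a time-frequency mask to the complex mixture STFT."""
    return mask * mixture_stft

# Toy example with random "predictions" standing in for the outputs of four DNNs.
rng = np.random.default_rng(0)
mixture_stft = rng.standard_normal((513, 100)) + 1j * rng.standard_normal((513, 100))
predicted_masks = [rng.uniform(0, 1, size=(513, 100)) for _ in range(4)]
source_estimate = apply_mask(mixture_stft, combine_masks(predicted_masks))
```

In practice the masks would come from the four trained networks (binary-mask, soft-mask, direct, and discriminative), and the masked STFT would be inverted back to the time domain to obtain the separated source.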
Original language: English
Title of host publication: Audio Engineering Society Convention 140
Publisher: Audio Engineering Society
Publication status: Published - 26 May 2016
Externally published: Yes
Event: 140th Audio Engineering Society Convention 2016 - Palais des Congrès, Paris, France
Duration: 4 Jun 2016 – 7 Jun 2016
Conference number: 140
Conference website: http://www.aes.org/events/140/

Conference

Conference: 140th Audio Engineering Society Convention 2016
Country/Territory: France
City: Paris
Period: 4/06/16 – 7/06/16
Internet address: http://www.aes.org/events/140/

