Remixing musical audio on the web using source separation

Gerard Roma, Andrew JR Simpson, Emad M Grais, Mark D Plumbley

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Research in audio source separation has progressed a long way, producing systems that are able to approximate the component signals of sound mixtures. In recent years, many efforts have focused on learning time-frequency masks that can be used to filter a monophonic signal in the frequency domain. Using current web audio technologies, time-frequency masking can be implemented in a web browser in real time. This allows source separation techniques to be applied to arbitrary audio streams, such as internet radio stations, subject to cross-domain security configurations. While producing good-quality separated audio from monophonic music mixtures is still challenging, current methods can be applied to remixing scenarios, where part of the signal is emphasized or deemphasized. This paper describes a system for remixing musical audio on the web by applying time-frequency masks estimated using deep neural networks. Our example prototype, implemented in client-side JavaScript, provides reasonable-quality results for small modifications.
Original language: English
Title of host publication: Proceedings of the 2nd Web Audio Conference (WAC 2016)
Number of pages: 4
ISBN (Electronic): 9780692619735
Publication status: Published - Apr 2016
Externally published: Yes
Event: 2nd Web Audio Conference - Atlanta, United States
Duration: 4 Apr 2016 - 6 Apr 2016
Conference number: 2


Conference: 2nd Web Audio Conference
Abbreviated title: WAC-2016
Country/Territory: United States


