A Comparison of Human against Machine-Classification of Spatial Audio Scenes in Binaural Recordings of Music

Sławomir Zieliński, Hyunkook Lee, Paweł Antoniuk, Oskar Dadan

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)

Abstract

The purpose of this paper is to compare the performance of human listeners against selected machine learning algorithms in the task of classifying spatial audio scenes in binaural recordings of music under practical conditions. Three scenes were subject to classification: (1) a music ensemble (a group of musical sources) located in the front, (2) a music ensemble located at the back, and (3) a music ensemble distributed around the listener. In the listening test, undertaken remotely over the Internet, human listeners reached a classification accuracy of 42.5%. For the listeners who passed the post-screening test, the accuracy was greater, approaching 60%. The same classification task was also undertaken automatically using four machine learning algorithms: a convolutional neural network, support vector machines, the extreme gradient boosting framework, and logistic regression. The machine learning algorithms substantially outperformed the human listeners, with classification accuracy reaching 84% when tested under binaural-room-impulse-response (BRIR) matched conditions. However, when the algorithms were tested under the BRIR mismatched scenario, their accuracy was comparable to that exhibited by the listeners who passed the post-screening test, implying that the machine learning algorithms' capability to perform in unknown electro-acoustic conditions needs to be further improved.
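The classifier comparison described above can be illustrated with a minimal, hypothetical sketch. The synthetic feature vectors and the scikit-learn models below are assumptions for illustration only — they stand in for the paper's actual binaural feature extraction and trained models, which are not specified here:

```python
# Hypothetical sketch of a 3-class spatial-scene classification comparison.
# Synthetic Gaussian features stand in for descriptors extracted from
# binaural recordings; this is NOT the paper's actual feature set or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_per_class = 200
# Three scenes: 0 = ensemble in front, 1 = at the back, 2 = around the listener.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, 8))
               for c in range(3)])
y = np.repeat(np.arange(3), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

results = {}
for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC())]:
    clf.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: {results[name]:.2f}")
```

In a matched/mismatched evaluation like the one in the paper, the test split would instead come from a different BRIR set than the training data, which is where the reported accuracy drop occurs.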
Original language: English
Article number: 5956
Number of pages: 24
Journal: Applied Sciences (Switzerland)
Volume: 10
Issue number: 17
Early online date: 28 Aug 2020
DOIs
Publication status: Published - 1 Sep 2020
