Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Descriptor spaces have become a ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples. In this paper we propose a novel framework for the automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.
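The abstract's pipeline (per-sample audio descriptors reduced to a navigable 2-D space) can be illustrated with a minimal sketch. This is not the authors' SuperCollider implementation; it is a hypothetical plain-Python example using PCA via power iteration, with random vectors standing in for real audio descriptors:

```python
# Hypothetical sketch of the general idea in the abstract: map each sound's
# high-dimensional feature vector to 2-D coordinates for an interactive layout.
# PCA by power iteration; pure stdlib, toy data in place of audio descriptors.
import random

def center(data):
    """Subtract the per-dimension mean from every feature vector."""
    dims = len(data[0])
    means = [sum(row[d] for row in data) / len(data) for d in range(dims)]
    return [[row[d] - means[d] for d in range(dims)] for row in data]

def top_component(data, iters=200):
    """Leading principal direction via power iteration on X^T X."""
    dims = len(data[0])
    v = [random.random() for _ in range(dims)]
    for _ in range(iters):
        xv = [sum(r[d] * v[d] for d in range(dims)) for r in data]   # X v
        w = [sum(xv[i] * data[i][d] for i in range(len(data)))       # X^T (X v)
             for d in range(dims)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return v

def project_2d(data):
    """Project centered descriptors onto the top two principal components."""
    data = center(data)
    c1 = top_component(data)
    # Deflate: remove the first component before estimating the second.
    deflated = [[r[d] - sum(r[k] * c1[k] for k in range(len(c1))) * c1[d]
                 for d in range(len(c1))] for r in data]
    c2 = top_component(deflated)
    return [(sum(r[d] * c1[d] for d in range(len(c1))),
             sum(r[d] * c2[d] for d in range(len(c1)))) for r in data]

random.seed(0)
# Toy stand-in for a sound collection: 50 samples, 13-dim descriptors
# (e.g. MFCC-like vectors); a real system would extract these from audio.
collection = [[random.gauss(0, 1) for _ in range(13)] for _ in range(50)]
coords = project_2d(collection)  # one (x, y) point per sound
```

The resulting coordinates could drive a 2-D interface where nearby points trigger acoustically similar samples; the paper compares several such dimensionality-reduction algorithms, not only PCA.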
Language: English
Title of host publication: Proceedings of the International Conference on New Interfaces for Musical Expression
Editors: Marcelo Queiroz, Anna Xambó Sedó
Place of Publication: Porto Alegre
Pages: 313-318
Number of pages: 6
Publication status: Published - Jun 2019
Event: The International Conference on New Interfaces for Musical Expression - Porto Alegre, Brazil
Duration: 3 Jun 2019 - 6 Jun 2019
https://www.nime.org/

Publication series

Name: Proceedings of the conference on New Interface for Musical Expression (NIME)
ISSN (Print): 2220-4806

Conference

Conference: The International Conference on New Interfaces for Musical Expression
Abbreviated title: NIME
Country: Brazil
City: Porto Alegre
Period: 3/06/19 - 6/06/19
Internet address: https://www.nime.org/


Cite this

Roma, G., Green, O., & Tremblay, P. A. (2019). Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces. In M. Queiroz, & A. X. Sedó (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 313-318). (Proceedings of the conference on New Interface for Musical Expression (NIME)). Porto Alegre.
Roma, Gerard ; Green, Owen ; Tremblay, Pierre Alexandre. / Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression. editor / Marcelo Queiroz ; Anna Xambó Sedó. Porto Alegre, 2019. pp. 313-318 (Proceedings of the conference on New Interface for Musical Expression (NIME)).
@inproceedings{2305974dc3144c26b60d5b83d51cd1a3,
title = "Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces",
abstract = "Descriptor spaces have become a ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples. In this paper we propose a novel framework for the automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.",
keywords = "Dimensionality reduction, feature learning, information visualisation",
author = "Gerard Roma and Owen Green and Tremblay, {Pierre Alexandre}",
year = "2019",
month = "6",
language = "English",
series = "Proceedings of the conference on New Interface for Musical Expression (NIME)",
pages = "313--318",
editor = "Marcelo Queiroz and Sed{\'o}, {Anna Xamb{\'o}}",
booktitle = "Proceedings of the International Conference on New Interfaces for Musical Expression",

}

Roma, G, Green, O & Tremblay, PA 2019, Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces. in M Queiroz & AX Sedó (eds), Proceedings of the International Conference on New Interfaces for Musical Expression. Proceedings of the conference on New Interface for Musical Expression (NIME), Porto Alegre, pp. 313-318, The International Conference on New Interfaces for Musical Expression, Porto Alegre, Brazil, 3/06/19.

Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces. / Roma, Gerard; Green, Owen; Tremblay, Pierre Alexandre.

Proceedings of the International Conference on New Interfaces for Musical Expression. ed. / Marcelo Queiroz; Anna Xambó Sedó. Porto Alegre, 2019. p. 313-318 (Proceedings of the conference on New Interface for Musical Expression (NIME)).


TY - GEN

T1 - Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces

AU - Roma, Gerard

AU - Green, Owen

AU - Tremblay, Pierre Alexandre

PY - 2019/6

Y1 - 2019/6

N2 - Descriptor spaces have become a ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples. In this paper we propose a novel framework for the automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.

AB - Descriptor spaces have become a ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples. In this paper we propose a novel framework for the automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.

KW - Dimensionality reduction

KW - feature learning

KW - information visualisation

UR - https://www.nime.org/

M3 - Conference contribution

T3 - Proceedings of the conference on New Interface for Musical Expression (NIME)

SP - 313

EP - 318

BT - Proceedings of the International Conference on New Interfaces for Musical Expression

A2 - Queiroz, Marcelo

A2 - Sedó, Anna Xambó

CY - Porto Alegre

ER -

Roma G, Green O, Tremblay PA. Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces. In Queiroz M, Sedó AX, editors, Proceedings of the International Conference on New Interfaces for Musical Expression. Porto Alegre. 2019. p. 313-318. (Proceedings of the conference on New Interface for Musical Expression (NIME)).