Abstract
Descriptor spaces have become a ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples. In this paper we propose a novel framework for the automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.
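The abstract describes a pipeline of per-sample feature extraction followed by unsupervised dimensionality reduction to lay out a collection as a navigable sound space. The Python sketch below is only a rough illustration of that general idea, not the authors' FluCoMa/SuperCollider implementation: it summarises each file in a hypothetical `samples/` folder with MFCC statistics and projects the summaries to two dimensions with t-SNE. The folder path, feature choice, and t-SNE parameters are all assumptions.

```python
# Illustrative sketch (not the FluCoMa library): build a 2-D "sound space"
# from a folder of audio samples via feature summaries + dimensionality reduction.
import glob
import numpy as np
import librosa
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE

def summarise(path, n_mfcc=13):
    """Summarise one sample as the mean and std of its MFCC frames."""
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

paths = sorted(glob.glob("samples/*.wav"))          # hypothetical collection
features = np.stack([summarise(p) for p in paths])  # shape: (n_samples, 2 * n_mfcc)

# Standardise the features, then reduce to two dimensions for an interactive layout.
# Note: t-SNE requires perplexity < number of samples.
coords = TSNE(n_components=2, perplexity=15, init="pca", random_state=0) \
    .fit_transform(StandardScaler().fit_transform(features))

for p, (x, y) in zip(paths, coords):
    print(f"{p}\t{x:.3f}\t{y:.3f}")  # each sample gets a 2-D position
```

The resulting coordinates could then back a simple browsing interface in which selecting a point plays the corresponding sample, which is the kind of interaction the paper's sound spaces support.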
Original language | English |
---|---|
Title of host publication | Proceedings of the International Conference on New Interfaces for Musical Expression |
Editors | Marcelo Queiroz, Anna Xambó Sedó |
Place of Publication | Porto Alegre |
Pages | 313-318 |
Number of pages | 6 |
Publication status | Published - Jun 2019 |
Event | The International Conference on New Interfaces for Musical Expression, Porto Alegre, Brazil. Duration: 3 Jun 2019 → 6 Jun 2019. https://www.nime.org/ |
Publication series
Name | Proceedings of the Conference on New Interfaces for Musical Expression (NIME) |
---|---|
ISSN (Print) | 2220-4806 |
Conference
Conference | The International Conference on New Interfaces for Musical Expression |
---|---|
Abbreviated title | NIME2019 |
Country | Brazil |
City | Porto Alegre |
Period | 3/06/19 → 6/06/19 |
Internet address | https://www.nime.org/ |
Activities
- 2 Invited talks
- The FluCoMa Project: Empowering techno-fluent sound artists
  Pierre Alexandre Tremblay (Speaker)
  15 Feb 2020 · Activity: Talk or presentation types › Invited talk
- Fluid Corpus Manipulation: blurring taxonomies through creative convergences of practices
  Pierre Alexandre Tremblay (Speaker)
  2 May 2019 · Activity: Talk or presentation types › Invited talk
Projects
- 1 Active
- FluCoMa: Fluid Corpus Manipulations
  Tremblay, P. A., Green, O., Roma, G., Harker, A., Clarke, M. & Dufeu, F.
  1/09/17 → 31/08/22 · Project: Research