Abstract
Descriptor spaces have become a ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples. In this paper we propose a novel framework for the automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.
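The paper's framework itself is a SuperCollider library, but the general pipeline the abstract describes (per-sample audio descriptors projected into a low-dimensional, navigable sound space) can be sketched in Python. This is a minimal illustration only, assuming synthetic stand-in descriptor vectors and a plain PCA in place of the feature-learning and dimensionality-reduction algorithms the paper actually compares; none of the names below come from the paper's library.

```python
import numpy as np

def reduce_to_2d(features):
    """Project per-sample descriptor vectors (n_samples x n_dims) to 2-D.

    A simple PCA via eigen-decomposition of the covariance matrix,
    standing in for the dimensionality-reduction step of the pipeline.
    """
    X = features - features.mean(axis=0)        # center the descriptors
    cov = np.cov(X, rowvar=False)               # (n_dims x n_dims) covariance
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    top2 = vecs[:, np.argsort(vals)[::-1][:2]]  # two leading components
    return X @ top2                             # 2-D coordinate per sound

# Toy stand-in for extracted audio descriptors
# (e.g. 20 spectral/MFCC statistics per sample).
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(100, 20))
coords = reduce_to_2d(descriptors)
print(coords.shape)  # (100, 2): one point per sound in the 2-D space
```

Each row of `coords` could then drive an interface, e.g. mapping cursor position to the nearest sample's playback, which is the kind of interaction the resulting sound spaces afford.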
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the International Conference on New Interfaces for Musical Expression |
| Editors | Marcelo Queiroz, Anna Xambó Sedó |
| Place of Publication | Porto Alegre |
| Pages | 313-318 |
| Number of pages | 6 |
| Publication status | Published - Jun 2019 |
| Event | 19th International Conference on New Interfaces for Musical Expression - Porto Alegre, Brazil Duration: 3 Jun 2019 → 6 Jun 2019 https://www.nime.org/ |
Publication series
| Name | Proceedings of the Conference on New Interfaces for Musical Expression (NIME) |
|---|---|
| ISSN (Print) | 2220-4806 |
Conference
| Conference | 19th International Conference on New Interfaces for Musical Expression |
|---|---|
| Abbreviated title | NIME2019 |
| Country/Territory | Brazil |
| City | Porto Alegre |
| Period | 3/06/19 → 6/06/19 |
| Internet address | https://www.nime.org/ |
Research output
- 16 Citations
- 1 Conference contribution
Interdisciplinary Research as Musical Experimentation: A case study in musicianly approaches to sound corpora
Green, O., Tremblay, P. A. & Roma, G., 24 Jan 2019, Proceedings of the Electroacoustic Music Studies Network Conference. Zenodo, 12 p.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review
Open Access
Activities
- 2 Invited talks
The FluCoMa Project: Empowering techno-fluent sound artists
Tremblay, P. A. (Speaker)
15 Feb 2020
Activity: Talk or presentation types › Invited talk
Fluid Corpus Manipulation: blurring taxonomies through creative convergences of practices
Tremblay, P. A. (Speaker)
2 May 2019
Activity: Talk or presentation types › Invited talk
Projects
- 1 Finished
FluCoMa: Fluid Corpus Manipulations
Tremblay, P. A. (PI), Green, O. (CoI), Roma, G. (CoI), Harker, A. (CoI), Clarke, M. (CoI) & Dufeu, F. (CoI)
1/09/17 → 28/02/23
Project: Research