Project Details
Description
The FluCoMa project instigates new musical ways of exploiting ever-growing banks of sound and gestures within the digital composition process by bringing breakthroughs in signal-decomposition DSP to the toolset of techno-fluent computer composers for the first time.
Cutting-edge musical composition has always been dependent on, critical of, and subversive of the latest advances in technology. Unfortunately, aesthetic research in computer composition faces a contemporary challenge: an ever-widening gap between DSP advances and their availability to musical investigators.
One such advance is signal decomposition: a sound can now be separated into its transient, pitched, and residual constituents. These potent algorithms are partially available in closed software or in laboratories, but not at a suitable level of modularity within the coding environments used by creative researchers (Max and SuperCollider) to allow groundbreaking sonic research into a rich, unexploited area: the manipulation of large sound corpora. Indeed, with the access to, genesis of, and storage of large sound banks now commonplace, novel ways of abstracting and manipulating them are needed to mine their inherent potential.
FluCoMa proposes to bridge this gap by empowering techno-fluent aesthetic researchers with a toolset for signal decomposition within the software environments they have mastered, so that they can experiment with the new sound and gesture design untapped in large corpora. The three degrees of manipulation to be explored are (1) expressive browsing and descriptor-based taxonomy; (2) remixing, component replacement, and hybridisation by concatenation; and (3) pattern recognition at the component level, with interpolation and variation-making potential. These novel manipulations will yield new sounds, new musical ideas, and new approaches to large corpora. At present, no library exists that allows such cutting-edge research into creative fluid corpus manipulation.
| Acronym | FluCoMa |
| --- | --- |
| Status | Finished |
| Effective start/end date | 1/09/17 → 28/02/23 |
Activities
7 Invited talks

- Beyond cohabitation: Towards a rich interdisciplinarity in music-technology research
  Pierre Alexandre Tremblay (Speaker), 24 May 2022
  Activity: Talk or presentation types › Invited talk
- PA Tremblay: sandbox#n: playing the laptop, the bass, the studio
  Pierre Alexandre Tremblay (Speaker), 13 Jul 2020
  Activity: Talk or presentation types › Invited talk
- A Beautiful Mess: Tales of in-between-ness in SMC research
  Pierre Alexandre Tremblay (Speaker), 26 Jun 2020
  Activity: Talk or presentation types › Invited talk
Datasets
- Performance Cartography, Performance Cartology: Multimedia Appendices
  Hart, J. (Creator), Dufeu, F. (Supervisor), Green, O. (Supervisor) & Tremblay, P. A. (Supervisor), University of Huddersfield, 1 Oct 2021
  DOI: https://doi.org/10.34696/p0y3-7f65, https://huddersfield.box.com/v/JacobHartThesisAppendices
  Dataset
Research outputs

- Cafe Oto, London: Performing Critical AI I: Improvisation with/against Machine Learners
  Green, O. & Tremblay, P. A., 27 Nov 2022
  Research output: Non-textual form › Performance. Open Access
- Enabling Programmatic Data Mining as Musicking: The Fluid Corpus Manipulation Toolkit
  Tremblay, P. A., Roma, G. & Green, O., 1 Jul 2022, In: Computer Music Journal, 45(2), p. 9-23 (15 p.)
  Research output: Contribution to journal › Article › peer-review. Open Access. 10 Citations (Scopus)
- Fluid Corpus Manipulation Toolbox
  Tremblay, P. A., Green, O., Roma, G., Bradbury, J., Moore, T., Hart, J. & Harker, A., 7 Jul 2022
  Research output: Non-textual form › Software. Open Access