Abstract
This paper presents the development of a method for perceptually optimising the acoustics and reverb of a virtual space. A spatial filtering technique was developed to group artificially rendered reflections by the spatial attribute they contribute to, e.g. apparent source width, distance, loudness or colouration. The current system alters the level of the different reflection groups depending on the desired type of optimisation. It is hoped that in the future this system could be coupled with machine learning techniques, so that it can determine the initial perceptual qualities of the artificial reverb and then optimise the acoustics according to the user’s needs. Such a system could ultimately be used to identify which spatial qualities are perceived as good or bad, and then optimise the acoustics automatically.
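The paper does not include code, but the core idea of adjusting the level of perceptually grouped reflections can be illustrated with a minimal sketch. Everything below is an assumption made for demonstration: the reflection representation, the group labels ("width", "distance", "colouration") and the per-group gain values are hypothetical, not the authors' actual system or parameters.

```python
# Illustrative sketch only: scale the level of reflection groups according to a
# chosen perceptual optimisation goal. Representation and gain values are assumed.
from dataclasses import dataclass


@dataclass
class Reflection:
    delay_ms: float      # arrival time relative to the direct sound
    azimuth_deg: float   # horizontal angle of incidence
    level_db: float      # level relative to the direct sound
    attribute: str       # perceptual group, e.g. "width", "distance", "colouration"


# Hypothetical per-group gain offsets (dB) for each optimisation goal; in practice
# these would be derived from listening tests or a learned model.
OPTIMISATION_PROFILES = {
    "widen_source":       {"width": +3.0, "distance": 0.0, "colouration": -2.0},
    "reduce_colouration": {"width": 0.0,  "distance": 0.0, "colouration": -6.0},
}


def optimise(reflections, goal):
    """Return a copy of the reflections with group-dependent level offsets applied."""
    gains = OPTIMISATION_PROFILES[goal]
    return [
        Reflection(r.delay_ms, r.azimuth_deg,
                   r.level_db + gains.get(r.attribute, 0.0),
                   r.attribute)
        for r in reflections
    ]


if __name__ == "__main__":
    room = [
        Reflection(8.0,   55.0,  -6.0, "width"),
        Reflection(22.0,   0.0,  -9.0, "distance"),
        Reflection(35.0, 120.0, -12.0, "colouration"),
    ]
    for r in optimise(room, "widen_source"):
        print(r)
```

The sketch simply applies a static gain per group; the system described in the abstract would choose these gains based on the desired optimisation, and a future machine-learning stage would estimate them from the perceptual qualities of the rendered reverb.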
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 4th Workshop on Intelligent Music Production |
| Number of pages | 4 |
| Publication status | Published - 10 Sep 2018 |