User-independent Accelerometer Gesture Recognition for Participatory Mobile Music

Gerard Roma, Anna Xambó, Jason Freeman

Research output: Contribution to journal › Article

Abstract

With the widespread use of smartphones that have multiple sensors and sound processing capabilities, there is a great potential for increased audience participation in music performances. This paper proposes a framework for participatory mobile music based on mapping arbitrary accelerometer gestures to sound synthesizers. The authors describe Handwaving, a system based on neural networks for real-time gesture recognition and sonification on mobile browsers. Based on a multiuser dataset, results show that training with data from multiple users improves classification accuracy, supporting the use of the proposed algorithm for user-independent gesture recognition. This illustrates the relevance of user-independent training for multiuser settings, especially in participatory music. The system is implemented using web standards, which makes it simple and quick to deploy software on audience devices in live performance settings.
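The abstract's central claim, that training on pooled data from multiple users yields a classifier that generalizes to unseen users, can be illustrated with a toy sketch. Everything below is invented for illustration: the synthetic "gestures", the hand-picked features, and a logistic classifier standing in for the neural network described in the paper. It is not the authors' implementation, only a minimal demonstration of user-independent training and held-out-user evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(window):
    """Per-axis mean, standard deviation, and mean absolute first
    difference of a (3, N) accelerometer window -> 9-dim vector.
    The difference feature is insensitive to per-user bias."""
    return np.concatenate([
        window.mean(axis=1),
        window.std(axis=1),
        np.abs(np.diff(window, axis=1)).mean(axis=1),
    ])

def make_user_data(n_windows, user_offset):
    """Synthetic 3-axis accelerometer windows for two made-up
    gestures: a slow wave (low frequency) vs. a shake (high
    frequency). Each user contributes a personal bias plus noise."""
    t = np.linspace(0.0, 1.0, 32)
    X, y = [], []
    for _ in range(n_windows):
        label = int(rng.integers(0, 2))
        freq = 2.0 if label == 0 else 10.0
        sig = np.sin(2 * np.pi * freq * t)[None, :] * rng.uniform(0.5, 1.5, (3, 1))
        sig = sig + user_offset + rng.normal(0.0, 0.2, (3, 32))
        X.append(features(sig))
        y.append(label)
    return np.array(X), np.array(y)

def train_logreg(X, y, lr=0.5, steps=500):
    """Plain gradient-descent logistic regression, a simple
    stand-in for the neural network used in the paper."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        z = np.clip(X @ w + b, -30.0, 30.0)  # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return float(((X @ w + b > 0).astype(int) == y).mean())

# User-independent training: pool windows from several simulated users...
train_sets = [make_user_data(60, off) for off in (-0.5, -0.2, 0.1, 0.4)]
X_train = np.vstack([X for X, _ in train_sets])
y_train = np.concatenate([y for _, y in train_sets])
w, b = train_logreg(X_train, y_train)

# ...then evaluate on a user the classifier has never seen.
X_test, y_test = make_user_data(60, 0.8)
acc_heldout = accuracy(w, b, X_test, y_test)
print(f"held-out-user accuracy: {acc_heldout:.2f}")
```

The design choice mirrored here is the one the abstract argues for: features computed per window, a discriminative classifier trained on data pooled across users, and evaluation on a user excluded from training, rather than per-user calibration.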
Original language: English
Pages (from-to): 430-438
Number of pages: 9
Journal: AES: Journal of the Audio Engineering Society
Volume: 66
Issue number: 6
DOIs: 10.17743/jaes.2018.0026
Publication status: Published - 18 Jun 2018

Cite this

@article{92f7293a4bd04256a5534c9d5fd93fad,
title = "User-independent Accelerometer Gesture Recognition for Participatory Mobile Music",
abstract = "With the widespread use of smartphones that have multiple sensors and sound processing capabilities, there is a great potential for increased audience participation in music performances. This paper proposes a framework for participatory mobile music based on mapping arbitrary accelerometer gestures to sound synthesizers. The authors describe Handwaving, a system based on neural networks for real-time gesture recognition and sonification on mobile browsers. Based on a multiuser dataset, results show that training with data from multiple users improves classification accuracy, supporting the use of the proposed algorithm for user-independent gesture recognition. This illustrates the relevance of user-independent training for multiuser settings, especially in participatory music. The system is implemented using web standards, which makes it simple and quick to deploy software on audience devices in live performance settings.",
author = "Gerard Roma and Anna Xamb{\'o} and Jason Freeman",
year = "2018",
month = "6",
day = "18",
doi = "10.17743/jaes.2018.0026",
language = "English",
volume = "66",
pages = "430--438",
journal = "AES: Journal of the Audio Engineering Society",
issn = "0004-7554",
publisher = "Audio Engineering Society",
number = "6"
}

User-independent Accelerometer Gesture Recognition for Participatory Mobile Music. / Roma, Gerard; Xambó, Anna; Freeman, Jason.

In: AES: Journal of the Audio Engineering Society, Vol. 66, No. 6, 18.06.2018, p. 430-438.

Research output: Contribution to journal › Article

TY - JOUR

T1 - User-independent Accelerometer Gesture Recognition for Participatory Mobile Music

AU - Roma, Gerard

AU - Xambó, Anna

AU - Freeman, Jason

PY - 2018/6/18

Y1 - 2018/6/18

N2 - With the widespread use of smartphones that have multiple sensors and sound processing capabilities, there is a great potential for increased audience participation in music performances. This paper proposes a framework for participatory mobile music based on mapping arbitrary accelerometer gestures to sound synthesizers. The authors describe Handwaving, a system based on neural networks for real-time gesture recognition and sonification on mobile browsers. Based on a multiuser dataset, results show that training with data from multiple users improves classification accuracy, supporting the use of the proposed algorithm for user-independent gesture recognition. This illustrates the relevance of user-independent training for multiuser settings, especially in participatory music. The system is implemented using web standards, which makes it simple and quick to deploy software on audience devices in live performance settings.

AB - With the widespread use of smartphones that have multiple sensors and sound processing capabilities, there is a great potential for increased audience participation in music performances. This paper proposes a framework for participatory mobile music based on mapping arbitrary accelerometer gestures to sound synthesizers. The authors describe Handwaving, a system based on neural networks for real-time gesture recognition and sonification on mobile browsers. Based on a multiuser dataset, results show that training with data from multiple users improves classification accuracy, supporting the use of the proposed algorithm for user-independent gesture recognition. This illustrates the relevance of user-independent training for multiuser settings, especially in participatory music. The system is implemented using web standards, which makes it simple and quick to deploy software on audience devices in live performance settings.

U2 - 10.17743/jaes.2018.0026

DO - 10.17743/jaes.2018.0026

M3 - Article

VL - 66

SP - 430

EP - 438

JO - AES: Journal of the Audio Engineering Society

JF - AES: Journal of the Audio Engineering Society

SN - 0004-7554

IS - 6

ER -