|
|
GSI is a Max patch that enables you to design a sound with any synthesiser, and then to reproduce and explore that sound through gestural interaction.
|
|
|
More specifically, it is a tool for designing gestures to perform time-varying synthesised sound.
|
|
|
|
|
|
This software was developed to conduct research on designing gestures for continuous sonic interaction ([Tanaka et al. 2019](http://www.nime.org/proceedings/2019/nime2019_036.pdf)).
|
|
|
|
GSI extends the notion of mapping-by-demonstration in a practical setting by enabling users to capture gesture while listening to sound, and then to train different machine learning models.
|
|
|
It associates the authoring of gesture with interactive sound synthesis and, in so doing, explores the connection between sound design and gesture design.
|
|
|
The technique uses commonly available tools for musical performance and machine learning, and assumes no specialist knowledge of machine learning. It will be useful for artists wishing to create gestures for interactive music performances in which gestural input articulates dynamic synthesised sound, where the association of gesture and sound is not made by direct mapping but is mediated by machine learning.
|
|
|
|
|
|
GSI uses automated techniques for training Neural Network and XMM models.
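
The underlying idea of training a model on demonstrated gesture/sound pairs can be sketched outside Max. The following Python example is illustrative only: the feature dimensions and the scikit-learn regressor are assumptions made for this sketch, not part of GSI, which records demonstrations and trains its models inside the patch.

```python
# Minimal sketch of mapping-by-demonstration (not the GSI patch itself):
# gesture frames recorded while listening to a sound are paired with the
# synthesiser parameters that produced that sound, and a regression model
# learns the gesture -> parameter mapping for later live performance.
import numpy as np
from sklearn.neural_network import MLPRegressor

# One demonstration: 500 aligned frames of gesture features and synth parameters.
# The dimensions (8 gesture features, 4 synth parameters) are assumptions.
gesture_frames = np.random.rand(500, 8)
synth_params = np.random.rand(500, 4)

# Train a small neural network regressor on the demonstrated pairs.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
model.fit(gesture_frames, synth_params)

# At performance time, each incoming gesture frame is mapped to synthesis
# parameters by the trained model rather than by a hand-made direct mapping.
live_frame = np.random.rand(1, 8)
predicted_params = model.predict(live_frame)  # shape (1, 4)
```
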
|
|
|
|
|
|
|
|
|
Related publication: [Tanaka, Di Donato, Zbyszynski, Roks (2019), Designing Gestures for Continuous Sonic Interaction, Proceedings of the International Conference on New Interfaces for Musical Expression, Porto Alegre, Brazil, pp. 180-185](http://www.nime.org/proceedings/2019/nime2019_036.pdf).
|
|