GSI is a Max patch for designing gestures to perform time-varying synthesised sound.
GSI extends the notion of mapping-by-demonstration to a practical setting by enabling users to capture gestures while listening to sound and then to train different machine learning models. It couples the authoring of gesture with interactive sound synthesis and, in so doing, explores the connection between sound design and gesture design.
The technique uses commonly available tools for musical performance and machine learning, and assumes no specialist knowledge of machine learning. It will be useful to artists who wish to create gestures for interactive music performances in which gestural input articulates dynamic synthesised sound, and in which the association between gesture and sound is not made by direct mapping but is mediated by machine learning.
GSI uses automated techniques for training neural network and XMM models.
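As a rough illustration of the mapping-by-demonstration idea (not GSI's own Max-based pipeline), the sketch below records time-aligned gesture and synthesis-parameter frames and trains a small neural network to map one onto the other. The feature dimensions, the scikit-learn model, and the random placeholder data are all hypothetical choices for this example.

```python
# Illustrative sketch only: GSI itself is a Max patch, so this Python
# code mimics the mapping-by-demonstration workflow with a generic
# neural network (scikit-learn's MLPRegressor), not GSI's training code.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical demonstration data: gesture features captured while the
# user listened to a sound whose synthesis parameters varied over time.
# Rows are time-aligned frames; the dimensions are placeholders.
rng = np.random.default_rng(0)
gesture_frames = rng.random((500, 6))  # e.g. 6 sensor features per frame
synth_params = rng.random((500, 3))    # e.g. 3 synthesis parameters per frame

# Train a regression model mapping gesture features to synthesis parameters.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
model.fit(gesture_frames, synth_params)

# At performance time, each incoming gesture frame drives the synthesiser.
live_frame = rng.random((1, 6))
predicted_params = model.predict(live_frame)  # shape (1, 3)
```

This captures the core idea named above: gesture and sound are associated not by a direct, hand-built mapping, but by a model trained on demonstrations.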