3. Open the Max project *gesture_sound_interaction_2.0* in the folder *continuous-gesture-sound-interaction/gesture_sound_interaction_2.0*
### Design gestures for continuous sonic interaction
1. Play the sound from the bigger play button in the [Sound Design](https://gitlab.doc.gold.ac.uk/biomusic/continuous-gesture-sound-interaction/wikis/Sound-design) panel. This sound can be processed using the simple synthesiser found in the *gesture_sound_interaction_2.0/patchers* folder.
The synthesiser is a variable-rate looping sample playback synth. It allows you to control the playback speed, the pitch shift, and the section of the audio file to loop.
GSI streams the synthesis parameters to the synthesiser via [Open Sound Control (OSC)](http://opensoundcontrol.org).
You can also use your own synthesiser, as long as it can receive OSC messages (see [How to use your own synthesiser](https://gitlab.doc.gold.ac.uk/biomusic/continuous-gesture-sound-interaction/wikis/How-To#use-your-own-synthesiser)).
You can select salient synthesis parameters, here called *break points*, that will be modulated over the temporal evolution of the sound.
These can be set by clicking the small play buttons and then moving the dials, or through the number boxes. For example, if you select break point number 1, you can edit the synth parameters relative to that break point through the envelope dials or number boxes.
The four break points then define the variation of a sound whose length can be edited from the *Duration* box above the status bar. To audition the sound, click on the big play button next to the status bar.
Envelope parameters can be stored into a *.json* file by clicking the *Save Current* button, or deleted with the *Delete Current* button. The *.json* file can then be exported (*Export* button) and reloaded in the next session through the *Load* button, so that the different presets can be recalled with the *Preset* number box (a hypothetical sketch of such a preset file follows the figure below).
![-soundDesign](uploads/945652f30b6a21013c46cafb73a63c7c/-soundDesign.png)
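The README does not specify the layout of the exported preset file, so the following is only a minimal sketch of what storing and reloading four break points as JSON could look like. The field names (`duration_ms`, `break_points`, `speed`, `pitch_shift`, `loop_start`, `loop_end`) are illustrative assumptions, not the actual GSI file format.

```python
import json

# Hypothetical preset: four break points, each holding the three synth
# parameters named in this README (playback speed, pitch shift, loop section).
preset = {
    "duration_ms": 4000,
    "break_points": [
        {"speed": 1.0, "pitch_shift": 0.0, "loop_start": 0.00, "loop_end": 0.25},
        {"speed": 1.5, "pitch_shift": 2.0, "loop_start": 0.25, "loop_end": 0.50},
        {"speed": 0.8, "pitch_shift": -1.0, "loop_start": 0.50, "loop_end": 0.75},
        {"speed": 1.0, "pitch_shift": 0.0, "loop_start": 0.75, "loop_end": 1.00},
    ],
}

# Save ("Export") and reload ("Load") the preset.
with open("preset.json", "w") as f:
    json.dump(preset, f, indent=2)

with open("preset.json") as f:
    reloaded = json.load(f)

assert reloaded["break_points"][0]["speed"] == 1.0
```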
2. Select one of the first three presets.
3. Perform the sound tracing exercise with the chosen preset.
4. When you want to record your sound trace, press the record gesture button in the [Gesture Design](https://gitlab.doc.gold.ac.uk/biomusic/continuous-gesture-sound-interaction/wikis/Gesture-design) panel. This panel allows you to record gestures associated with a time-varying sound trajectory through a sound-tracing exercise, or hand poses associated with breakpoints in the sound, here called *anchor points*.
To record a gesture, click the record gesture button in the Gesture Design panel.
To record static poses associated with each of the anchor points, first select the break point and then click on the related anchor point record button.
![-gestureDesignml](uploads/2f46b2a995d440e73f2c2d96e86e5080/-gestureDesignml.png)
5. Later, find the poses related to the four anchor points and record them by selecting the anchor point and then pressing the related anchor point record button. The available machine learning approaches are Whole Regression, Windowed Regression, XMM and Static Regression, described below.
### Whole regression
The whole regression approach trains a neural network using gestural data generated while performing a gesture over the whole duration of the sound. We call this algorithm Whole Regression.
![whole_gesture](uploads/ed8f2d2deb9cfa88be75c89765f8edcb/whole_gesture.png)
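As a rough illustration of this idea (not the project's actual implementation), the sketch below fits a small neural network that maps gesture sensor frames to synthesis parameters, using scikit-learn's `MLPRegressor` as a stand-in for the network used by GSI; the array shapes, channel counts and random data are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumed training data from one sound-tracing recording:
# one row of sensor values per frame, aligned with the synth
# parameters (e.g. speed, pitch shift, loop point) at that frame.
rng = np.random.default_rng(0)
gesture_frames = rng.normal(size=(500, 8))   # 500 frames, 8 sensor channels
synth_params = rng.uniform(size=(500, 3))    # 3 synthesis parameters per frame

# Train on the whole duration of the gesture/sound pair.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(gesture_frames, synth_params)

# In performance, each incoming sensor frame yields synth parameters,
# which would then be streamed to the synthesiser over OSC.
new_frame = rng.normal(size=(1, 8))
print(model.predict(new_frame))
```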
### Windowed regression
This approach trains a neural network with gestural data and synthesis parameters from four temporal windows centred around the four fixed anchor points in the sound.
Anchor points are defined as points in time where there is a breakpoint in the functions that generate synthesis parameters over time (red circles in the figure below).
This includes the beginning and end of the sound, as well as two equally spaced intermediate points.
Training data are recorded during windows that are centred around the anchor points and have a size of 1/6 of the whole duration of the given sound (grey areas in the figure below). We call this Windowed Regression.
![windowed_regression](uploads/ea257fd62e839b85f0ea4a3473054277/windowed_regression.png)
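Under the definition above, the anchor points of a sound of duration `T` sit at 0, T/3, 2T/3 and T, and each training window spans T/6 centred on its anchor point (clipped at the ends of the sound). Below is a minimal sketch of selecting training frames this way; the duration and frame timestamps are made up for illustration.

```python
import numpy as np

def anchor_windows(duration, n_anchors=4, window_frac=1/6):
    """Return (start, end) of a window of width duration*window_frac
    centred on each of n_anchors equally spaced anchor points."""
    anchors = np.linspace(0.0, duration, n_anchors)
    half = duration * window_frac / 2
    return [(max(0.0, a - half), min(duration, a + half)) for a in anchors]

# Keep only the gesture frames that fall inside one of the windows.
duration = 6.0                                 # seconds (assumed)
frame_times = np.linspace(0.0, duration, 600)  # timestamp of each sensor frame
windows = anchor_windows(duration)
mask = np.zeros_like(frame_times, dtype=bool)
for start, end in windows:
    mask |= (frame_times >= start) & (frame_times <= end)

print(windows)  # [(0.0, 0.5), (1.5, 2.5), (3.5, 4.5), (5.5, 6.0)]
print(mask.sum(), "of", len(frame_times), "frames used for training")
```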
### XMM
This approach uses the XMM library, which trains multimodal Hidden Markov Models on paired gesture and synthesis parameter data; during performance, the model follows the gesture temporally and generates the associated synthesis parameters.
### Static regression
After designing the sound-gesture interaction through the sound tracing exercise, users segment their gestural performance into four discrete poses, or anchor points. These points coincide with breakpoints in
the synthesis parameters (see Windowed Regression). Training data are recorded by pairing sensor data from static poses with fixed synthesis parameters.
These data are used to train a regression model, so in performance participants can explore a continuous mapping between the defined training points.
We refer to this technique as Static Regression.
![static_regression](uploads/a4a3ae7cc1afdba4df37d156005d9f7b/static_regression.png)
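A toy sketch of this idea (again with scikit-learn standing in for the actual model, and with made-up shapes and data): each of the four anchor poses contributes sensor frames labelled with that anchor's fixed synthesis parameters, and the fitted regressor interpolates between them at performance time.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Four static poses (assumed 8 sensor channels) and the fixed synth
# parameters of the break point each pose was recorded against.
poses = rng.normal(size=(4, 8))
params = rng.uniform(size=(4, 3))

# Each pose is held for a while, so many near-identical frames are
# recorded per anchor point; the jitter stands in for sensor noise.
X = np.repeat(poses, 100, axis=0) + rng.normal(scale=0.05, size=(400, 8))
y = np.repeat(params, 100, axis=0)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)
model.fit(X, y)

# A pose "between" two anchors yields interpolated synth parameters.
blend = (poses[0] + poses[1]) / 2
print(model.predict(blend.reshape(1, -1)))
```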
6. Now you can choose one of the [machine learning approaches](https://gitlab.doc.gold.ac.uk/biomusic/continuous-gesture-sound-interaction/wikis/Machine-Learning) and explore the sound with the same or new gestures.
### Use your own synthesiser
If you want to use GSI with a different synth, set your synth to receive the OSC messages configured through the *OSC Communication* GUI panel. Messages are listed in the [OSC communication page](https://gitlab.doc.gold.ac.uk/biomusic/continuous-gesture-sound-interaction/wikis/OSC-communication). A minimal receiver sketch is shown below.
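As a sketch of the receiving side, the snippet below uses the `python-osc` package to listen for incoming parameter messages. The port (`9000`) and the OSC addresses are placeholders only; the real addresses are those listed on the OSC communication wiki page.

```python
# pip install python-osc
from pythonosc.dispatcher import Dispatcher
from pythonosc import osc_server

def on_param(address, *values):
    # Forward the received value(s) to your synthesiser here.
    print(address, values)

dispatcher = Dispatcher()
# Placeholder addresses -- replace with those on the OSC communication page.
dispatcher.map("/gsi/speed", on_param)
dispatcher.map("/gsi/pitch", on_param)
dispatcher.map("/gsi/loop", on_param)

server = osc_server.ThreadingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
print("Listening for GSI OSC messages on port 9000...")
server.serve_forever()
```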