Models, Analysis and Execution of Audio Graphs in Interactive Multimedia Systems
Interactive Multimedia Systems (IMS) are used on stage for interactive performances, combining in real time acoustic instruments, electronic instruments, data from various sensors (gestures, MIDI interfaces, etc.), and the control of different media (video, lighting, etc.).
This thesis presents a formal model of audio graphs, via a type system and a denotational semantics, with multirate timestamped buffered data streams that make it possible to represent, at varying levels of precision, the interleaving of control (for example a low-frequency oscillator, or velocities from an accelerometer) and audio processing. The development of this model was motivated by an audio extension of Antescofo, an IMS that acts as a score follower and includes a dedicated synchronous timed language. This extension makes it possible to safely connect Faust effects and native effects on the fly. The approach has been validated on a mixed-music piece and on an example of audio and video interaction.
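To make the notion of multirate timestamped buffered streams concrete, the following is a minimal illustrative sketch (not the thesis's actual formalism): each stream is a sequence of buffers carrying a logical start date and a sample rate, and a low-rate control stream can be interleaved with a high-rate audio stream on a common timeline. All names (`Buffer`, `interleave`) are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Buffer:
    """A timestamped buffer in a multirate stream.
    `start` is the logical date (in seconds) of the first sample;
    `rate` is the stream's sample rate in Hz."""
    start: float
    rate: float
    samples: List[float]

    def end(self) -> float:
        """Logical date just past the last sample."""
        return self.start + len(self.samples) / self.rate

def interleave(control: List[Buffer], audio: List[Buffer]) -> List[Buffer]:
    """Merge a low-rate control stream (e.g. an LFO, or accelerometer
    velocities) with a high-rate audio stream into one timeline,
    ordered by timestamp."""
    return sorted(control + audio, key=lambda b: b.start)

# An audio stream at 44.1 kHz in blocks of 4 samples, and a control
# stream at 100 Hz carrying one value per buffer.
audio = [Buffer(i * 4 / 44100.0, 44100.0, [0.0] * 4) for i in range(3)]
control = [Buffer(i / 100.0, 100.0, [0.5]) for i in range(2)]
timeline = interleave(control, audio)
```

In such a representation, the precision with which control is interleaved with audio depends on the buffer sizes and rates chosen for each stream.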
Finally, this thesis proposes offline optimizations based on automatically resampling parts of the audio graph to be executed. A model of quality and execution time within the graph is defined. It was studied experimentally using a prototype IMS based on the automatic generation of audio graphs, which also made it possible to characterize the resampling strategies proposed for the online, real-time case.
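The tradeoff underlying such optimizations can be sketched as follows: processing part of the graph at a lower sample rate reduces the number of samples each node must compute, and hence its execution time, at the cost of audio quality (bandwidth lost above the new Nyquist frequency). This is a toy illustration under stated assumptions (a naive linear-interpolation resampler and a cost proportional to sample count), not the quality and execution-time model defined in the thesis.

```python
from typing import List

def resample_linear(samples: List[float], src_rate: float,
                    dst_rate: float) -> List[float]:
    """Naive linear-interpolation resampler (illustration only;
    a real implementation would band-limit before downsampling)."""
    n_out = max(1, int(round(len(samples) * dst_rate / src_rate)))
    out = []
    for i in range(n_out):
        # Fractional position of output sample i in the input buffer.
        pos = i * (len(samples) - 1) / max(1, n_out - 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

def node_cost(n_samples: int, cost_per_sample: float = 1.0) -> float:
    """Toy cost model: execution time proportional to the number of
    samples a node processes in one cycle."""
    return n_samples * cost_per_sample

# Downsampling a node's input from 44.1 kHz to 22.05 kHz halves the
# samples it processes per cycle, and so its cost in this model.
block = [0.0] * 64
down = resample_linear(block, 44100.0, 22050.0)
saving = node_cost(len(block)) - node_cost(len(down))
```

Choosing which subgraphs to resample, and by how much, is then a matter of trading this saving against the quality degradation along each path of the graph.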