Jose-Miguel FERNANDEZ defended his thesis in Music Composition on 15 September at 2:30 PM:
Towards a unified system of interaction and synchronisation in electroacoustic and mixed composition: centralised electronic scores
The defence is available via YouTube: https://youtu.be/QKUPBiXt1M0
The jury was composed of:
Alain Bonardi, Associate Professor (Maître de conférences), Laboratoire Esthétique, musicologie, danse et création musicale, Université Paris 8
Jean-François Trubert, Professor, Centre transdisciplinaire d'épistémologie de la littérature et des arts vivants, Université de Nice
Philippe Manoury, Composer — examiner
Georgia Spiropoulos, Composer — examiner
Yann Orlarey, Composer and Scientific Director of GRAME, Centre National de Création Musicale — examiner
Jean-Louis Giavitto, CNRS Research Director (STMS: CNRS, Ircam, Sorbonne Université, Ministère de la Culture) — thesis director
Pierre Couprie, Professor, Centre d'histoire culturelle des sociétés contemporaines, Université Paris-Saclay — thesis director
With the advent of computer technology and its integration into musical composition, new avenues for compositional and sound research have opened up. Yet while sound generators and synthesis techniques have proliferated for years, few tools address both the formal construction (at composition time) and the real-time control (at performance time) of electronic music across multiple levels, with the aim of tightly integrating instrumental writing with interactive electronic processes.
The artistic challenge lies in specifying and implementing the interactions between the performer's interpretive freedom on stage and the sound processes running in real time, relying on efficient capture devices and synchronisation mechanisms. Centralising and coordinating these interactions within a single score aims at a tight integration of dynamic, generative electronic processes with different temporal media.
The corresponding computer tools must make it possible, during the performance of a musical or scenic work, to realise the complex temporal relationships expressed in the score, by controlling in real time the flows of events interconnecting the performers, the electronic part, the audience, the stage devices, and the systems for producing, diffusing and transforming sound.
Building on new, more expressive programming languages dedicated to writing electronics, such as Antescofo, and on high-performance synthesis and signal-processing systems, such as SuperCollider, this work has led to the development of a dedicated library: AntesCollider. The library enables experimentation with new approaches to writing electronics through the multi-temporal, multi-scale organisation of sounds and their interaction structures.
By leveraging computational notions of agents, processes and real-time algorithms, these sound structures can be combined dynamically and polyphonically, in direct relation to external events (motion capture, interconnection with other media), opening up new compositional paradigms and bringing new freedom and plasticity to music-making.
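To make the agent/process idea concrete, here is a minimal sketch in Python of the general pattern described above: agents are spawned dynamically into a shared score and all react in parallel to an incoming external event. This is purely illustrative and assumes nothing about the actual AntesCollider API (which is built on Antescofo and SuperCollider, not Python); the `Agent`, `Score`, `spawn` and `dispatch` names are hypothetical.

```python
# Hypothetical illustration -- NOT the AntesCollider API.
# Agents react to external events; a centralised "score" coordinates them.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    name: str
    # Reaction to an (event, time) pair; returns a description of the action.
    on_event: Callable[[str, float], str]

@dataclass
class Score:
    agents: List[Agent] = field(default_factory=list)
    log: List[str] = field(default_factory=list)

    def spawn(self, agent: Agent) -> None:
        # Agents can be added dynamically, mid-performance.
        self.agents.append(agent)

    def dispatch(self, event: str, time: float) -> None:
        # Broadcast an external event (e.g. a captured gesture);
        # every active agent reacts, polyphonically.
        for a in self.agents:
            self.log.append(a.on_event(event, time))

score = Score()
score.spawn(Agent("synth", lambda e, t: f"synth reacts to {e} at {t}"))
score.spawn(Agent("fx",    lambda e, t: f"fx reacts to {e} at {t}"))
score.dispatch("gesture", 1.5)
print(score.log)
```

The point of the sketch is only the coordination pattern: a single centralised score receives external events and fans them out to concurrently active sound-producing agents, mirroring the "centralised electronic score" idea of the thesis title.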