Dancer and choreographer Marie Bruand and Sarah Nabi, a doctoral student at the STMS laboratory (Ircam, Sorbonne University, CNRS, French Ministry of Culture), have devised Prélude, a performance they invite you to discover at Nuit Blanche in Paris, in six 30-minute sessions.
What if the body became a musical instrument, and the mind its musician? Prélude is a dance and music performance in which a dancer generates music through her movements. Immersed in a contemporary soundscape, the audience watches the dancer tame this musical body and rediscover the close link between dance and music. Prélude is an opportunity to question our own relationship with these two arts: listening to movement and watching music become possible.
This performance is the result of a close collaboration between art and science. Thanks to artificial intelligence and technologies developed by the STMS laboratory, the dancer's body becomes an instrument of sound creation.
The performance draws on several of the laboratory's technologies to control the model and generate new sounds in real time from the dynamics of the dancer's movements: the RAVE (Real-time Audio Variational autoEncoder) deep neural network sound-synthesis model, developed by Antoine Caillon within the ACIDS project of the Musical Representations and Sound Analysis and Synthesis research teams; the MuBu real-time motion-analysis system, from the Sound Music Movement Interaction (ISMM) team; and the R-IoT motion sensors, from the Engineering and Prototyping department (PIP).
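To make the pipeline concrete, here is a minimal, hypothetical sketch of the idea: summarise a window of motion-sensor data into a few features, then map those features to a latent vector that a neural decoder such as RAVE could turn into sound. Everything here (feature choice, latent dimension, the random linear map standing in for a learned one) is an illustrative assumption, not the production system, which relies on MuBu analysis and a trained RAVE model.

```python
import numpy as np

LATENT_DIM = 8  # assumed latent dimensionality, for illustration only


def motion_features(accel: np.ndarray) -> np.ndarray:
    """Summarise a window of 3-axis accelerometer samples (shape: N x 3).

    Returns [mean magnitude, magnitude std, peak magnitude], a crude
    stand-in for the richer descriptors a motion-analysis system extracts.
    """
    mag = np.linalg.norm(accel, axis=1)  # per-sample acceleration magnitude
    return np.array([mag.mean(), mag.std(), mag.max()])


def features_to_latent(feats: np.ndarray, dim: int = LATENT_DIM) -> np.ndarray:
    """Project the feature vector into a latent vector with a fixed random
    linear map (a placeholder for a learned feature-to-latent mapping)."""
    rng = np.random.default_rng(0)            # fixed seed: reproducible map
    w = rng.standard_normal((dim, feats.size))
    return np.tanh(w @ feats)                 # squash into (-1, 1)


if __name__ == "__main__":
    # Fake sensor window standing in for a live R-IoT stream.
    accel = np.random.default_rng(1).standard_normal((64, 3))
    z = features_to_latent(motion_features(accel))
    print(z.shape)  # an 8-dimensional latent vector, ready for a decoder
```

In a real setup, `z` would be fed frame by frame to the decoder of a trained generative model, so that faster or sharper movements steer the synthesis toward different regions of the latent space.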
Marie Bruand is a contemporary dancer and choreographer. Her work explores the cross-disciplinary nature of dance and the other arts. Inspired by William Forsythe, Anne Teresa De Keersmaeker and Laban's methods of composition, Marie builds her choreographies around the sensations of the body, flow and listening. Trained in classical, contemporary and hip-hop dance, she draws on these aesthetics to create a fully fledged fusion in her movements.
Sarah Nabi is a first-year doctoral student at the STMS laboratory (UMR 9912: Ircam, Sorbonne University, CNRS, French Ministry of Culture), funded through the DIM AI4IDF call for projects, working in the Sound Analysis and Synthesis and Sound Music Movement Interaction (ISMM) teams, and at LTCI, Télécom Paris, in the ADASP group, under the supervision of Philippe Esling, Frédéric Bevilacqua and Geoffroy Peeters. This artistic collaboration is part of her thesis, "Learning adapted representations for gestural control of audio synthesis using deep generative models". The aim is to propose new control methods suited to creative uses: making it easy to customise the parameters of deep neural network sound synthesis, and using movement dynamics to interact with these models, so that the body becomes an instrument that can be tailored to artistic intent.