Les Mercredis de STMS welcomes Giorgia Cantisani, a CNRS researcher in the Sound Music Movement Interaction team. She will give a talk in English entitled "Investigating the shared neural processing of music and speech with data-driven modeling of brain data."
A Zoom link is available for remote attendance:
https://us06web.zoom.us/j/83720019933?pwd=DHJA77HySnKklsyywWycOx2Wq0pOb3.1
Meeting ID: 837 2001 9933
Password: 519707
Abstract:
In this presentation, I will give an overview of my work on music perception, with a particular focus on how the brain processes music compared to speech in naturalistic scenarios. I will discuss how data-driven modeling can be used in this context to link continuous, complex sounds to multivariate neural activity, complementing more traditional paradigms that rely on discrete, controlled stimuli. Such modeling allows us to probe underlying cognitive and neural processes that are otherwise hard to access, as they operate on the natural unfolding of musical and linguistic structures over time (e.g., predictive mechanisms) and are modulated by complex internal states (e.g., attention). Within this framework, we were able to probe signatures of shared and distinct neural processing of music and speech, as well as how factors such as attention, structure, and context shape their representations in the brain.
Biography:
Giorgia Cantisani is a CNRS researcher working at the intersection of auditory neuroscience and machine learning. She earned her PhD at Télécom Paris, where she worked on decoding brain data for music-related brain-computer interfaces (BCIs). Since then, she has worked at École Normale Supérieure and now at IRCAM, investigating how the brain processes complex sounds such as music and speech, and how the two interact in song.