LES MERCREDIS DE STMS: Rodrigo CADIZ

"Creativity in generative musical neural networks"

  • Research
  • Seminars

Les Mercredis de STMS and the Musical Representations team of the STMS Laboratory (Ircam - CNRS - Sorbonne Université - Ministère de la Culture) invite you to meet Rodrigo Cádiz, who will present "Creativity in generative musical neural networks".

The presentation is open to all at Ircam, and you can also follow it live at https://www.youtube.com/watch?v=PfpLjOsurSE

The presentation will be in English.

Abstract:
Deep learning, one of the fastest-growing branches of artificial intelligence, has become one of the most active areas of research and development in recent years, especially since 2012, when a neural network surpassed the most advanced image classification techniques of the time. This spectacular development has also reached the world of the arts, as recent advances in generative networks have made possible the artificial creation of high-quality content such as images, films, or music. We believe that these novel generative models pose a great challenge to our current understanding of computational creativity. If a machine can now create music that an expert cannot distinguish from music composed by a human, create novel musical entities that were not known at training time, or exhibit conceptual leaps, does that make the machine creative? We believe that the emergence of these generative models clearly signals that much more research needs to be done in this area. We would like to contribute to this debate with two case studies of our own: TimbreNet, a variational auto-encoder network trained to generate audio-based musical chords, and StyleGAN Pianorolls, a generative adversarial network capable of creating short musical excerpts, despite having been trained with images rather than musical data. We discuss and assess these generative models in terms of their creativity. We show that they are in practice capable of learning musical concepts that are not obvious from the training data, and we hypothesize that these deep models, based on our current understanding of creativity in robots and machines, can indeed be considered creative.
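The generation mechanism behind a model like TimbreNet can be outlined in a few lines. The sketch below is purely illustrative and uses hypothetical dimensions and untrained weights (the announcement does not describe TimbreNet's actual architecture): a variational auto-encoder encodes an input into a latent Gaussian, samples from it with the reparameterization trick, and decodes; new material is generated by decoding latent vectors drawn from the prior.

```python
# Minimal VAE sketch (hypothetical, not the actual TimbreNet architecture).
import numpy as np

rng = np.random.default_rng(0)

IN_DIM, LATENT_DIM = 12, 2  # e.g. a 12-bin chroma vector representing a chord

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.normal(scale=0.1, size=(IN_DIM, 2 * LATENT_DIM))  # -> (mu, logvar)
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM, IN_DIM))

def encode(x):
    # Map the input to the parameters of a latent Gaussian.
    h = x @ W_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]  # mu, logvar

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps sampling differentiable during training.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    # Sigmoid output: per-pitch-class activations in (0, 1).
    return 1 / (1 + np.exp(-(z @ W_dec)))

# "Generation": decode a latent vector drawn from the prior N(0, I).
z = rng.normal(size=LATENT_DIM)
chord = decode(z)
```

Sampling from regions of the latent space between training examples is what allows such a model to produce chords it never saw at training time, which is central to the creativity question the talk raises.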

Biography:
Rodrigo F. Cádiz is a composer, researcher, and engineer. He studied composition and electrical engineering at the Pontificia Universidad Católica de Chile (UC) in Santiago and obtained his Ph.D. in Music Technology from Northwestern University. His compositions, approximately 60 works, have been presented at venues and festivals around the world. His catalogue includes works for solo instruments, chamber music, symphonic and robot orchestras, visual music, computers, and new interfaces for musical expression. He has received several composition prizes and artistic grants in both Chile and the United States. He has authored around 60 scientific publications in peer-reviewed journals and international conferences. His areas of expertise include sonification, sound synthesis, digital audio processing, computer music, composition, new interfaces for musical expression, and the musical applications of complex systems. He has obtained research funds from Chilean governmental agencies such as ANID and CNCA, and received a Google Latin American Research Award (LARA) in the field of auditory graphs. In 2018, Rodrigo was a composer in residence with the Stanford Laptop Orchestra (SLOrk) at the Center for Computer Research in Music and Acoustics (CCRMA), and a Tinker Visiting Professor at the Center for Latin American Studies, Stanford University. In 2019, he received the Prize for Excellence in Artistic Creation at UC. He chaired the 2021 edition of the International Computer Music Conference. He is currently a professor at both the Music Institute and the Electrical Engineering Department at UC.
