Considering movement-based interaction beyond the mouse-keyboard paradigm, the ANR project ELEMENT (2018-2021) proposes to shift the focus from intuitiveness/naturalness towards learnability: new interaction paradigms might require users to develop specific sensorimotor skills compatible with, and transferable between, digital interfaces (including video, mobile, Internet of Things, and game interfaces). With learnable embodied interactions, novice users should be able to approach a new system at a difficulty adapted to their expertise; the system should then carefully adapt to their improving motor skills, eventually enabling complex, expressive and engaging interactions.
Our project addresses both methodological and modelling issues. First, we need to elaborate methods to design learnable movement vocabularies, whose units are easy to learn and can be composed to create richer and more expressive movement phrases. Since movement vocabularies proposed by novice users are often idiosyncratic with limited expressive power, we propose to capitalize on the knowledge and experience of movement experts such as dancers and musicians. For example, dance practitioners commonly use the notion of movement qualities (i.e. describing “how” a movement is performed [Fdili Alaoui et al., 2012]), which can be key to describing movements, as well as methods to memorize choreographic phrases such as marking (i.e. performing a movement sequence with simplified gestures). Second, we need to conceive computational models able to analyze users’ movements in real time and to provide various multimodal feedback and guidance mechanisms (e.g. visual and auditory feedback). Importantly, the movement models must take into account the user’s expertise and learning development. We argue that computational movement models able to adapt to user-specific learning pathways are key to facilitating the acquisition of motor skills.
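To make the modelling idea concrete, the adaptation loop described above can be sketched minimally: a model scores a performed movement against a reference template and adjusts its tolerance so that feedback difficulty tracks the user's skill. All names, the scoring function, and the adaptation rule below are illustrative assumptions, not the project's actual models.

```python
import math

class AdaptiveMovementModel:
    """Toy sketch of an expertise-aware movement model (hypothetical).

    Scores a movement against a reference template and adapts its
    tolerance so that feedback stays matched to the user's skill.
    """

    def __init__(self, template, tolerance=1.0, adapt_rate=0.1):
        self.template = template    # reference trajectory (list of floats)
        self.tolerance = tolerance  # how forgiving the matcher is
        self.adapt_rate = adapt_rate

    def score(self, movement):
        """Return a 0..1 accuracy score for a performed movement."""
        err = math.sqrt(
            sum((a - b) ** 2 for a, b in zip(self.template, movement))
            / len(self.template)
        )
        return math.exp(-err / self.tolerance)

    def update(self, movement):
        """Adaptation loop: tighten the tolerance when the user performs
        well, relax it when they struggle, keeping success near a target."""
        s = self.score(movement)
        target = 0.7  # assumed target success rate
        self.tolerance *= 1.0 - self.adapt_rate * (s - target)
        self.tolerance = max(0.05, self.tolerance)
        return s

# Example: an accurate performance tightens the model's tolerance.
model = AdaptiveMovementModel(template=[0.0, 0.5, 1.0, 0.5, 0.0])
score = model.update([0.1, 0.4, 0.9, 0.6, 0.1])
```

In a real system the template would be a multidimensional motion-capture trajectory and the matcher a probabilistic or temporal model, but the same closed loop — analyze, score, adapt — underlies the feedback and guidance mechanisms we aim to develop.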
Research questions and aims
We propose to address three main research questions:
- How to design body movement as an input modality whose components are easy to learn, yet allow for complex and rich interaction techniques that go beyond simple commands?
- What computational movement modelling can account for sensorimotor adaptation and/or learning in embodied interaction?
- How to optimize model-driven feedback and guidance to facilitate skill acquisition in embodied interaction?