PirouNet, a deep learning model for intentional dance


Pre-Print: arxiv.org/abs/2207.12126

Code: github.com/bioshape-lab/pirounet

PirouNet is a semi-supervised variational autoencoder with long short-term memory (LSTM) layers that conditionally generates dance sequences from a small number of choreographer-provided aesthetic annotations.
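
To make the architecture concrete, below is a minimal sketch of a conditional recurrent VAE: an LSTM encoder compresses a pose sequence into a latent code conditioned on a label, and an LSTM decoder reconstructs (or generates) a sequence from that code and label. All class names, layer sizes, and the example dimensions (pose_dim, label_dim) are illustrative assumptions, not PirouNet's actual code; see the linked repository for the real implementation.

import torch
import torch.nn as nn

class CondLstmVae(nn.Module):
    """Illustrative conditional LSTM-VAE for pose sequences (batch, time, pose_dim)."""

    def __init__(self, pose_dim=159, label_dim=3, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.encoder = nn.LSTM(pose_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim + label_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim + label_dim, latent_dim)
        self.decoder = nn.LSTM(latent_dim + label_dim, hidden_dim, batch_first=True)
        self.to_pose = nn.Linear(hidden_dim, pose_dim)

    def encode(self, x, y):
        _, (h, _) = self.encoder(x)                # h: (1, batch, hidden_dim)
        h = torch.cat([h[-1], y], dim=-1)          # condition on the one-hot label
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z, y, seq_len):
        zy = torch.cat([z, y], dim=-1).unsqueeze(1).repeat(1, seq_len, 1)
        out, _ = self.decoder(zy)
        return self.to_pose(out)                   # reconstructed pose sequence

    def forward(self, x, y):
        mu, logvar = self.encode(x, y)
        z = self.reparameterize(mu, logvar)
        return self.decode(z, y, x.shape[1]), mu, logvar

Under this setup, conditional generation amounts to sampling z from the standard normal prior and decoding it together with the desired label, e.g. model.decode(torch.randn(1, 32), y, seq_len).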

How do humans and computers perceive motion? We prioritize the subjective experience of the body, while computers encode objective equations of motion. Can we reconcile the two? Meet PirouNet, an artificial intelligence that creates dance from choreographers' artistic input!

Abstract: Using Artificial Intelligence (AI) to create dance choreography with intention is still at an early stage. Methods that conditionally generate dance sequences remain limited in their ability to follow choreographer-specific creative direction, often relying on external prompts or supervised learning. Moreover, fully annotated dance datasets are rare and labor-intensive to produce. To fill this gap and help leverage deep learning as a meaningful tool for choreographers, we propose "PirouNet", a semi-supervised conditional recurrent variational autoencoder together with a dance labeling web application. PirouNet allows dance professionals to annotate data with their own subjective creative labels and subsequently generate new bouts of choreography based on their aesthetic criteria. Thanks to the proposed semi-supervised approach, PirouNet only requires a small portion of the dataset to be labeled, typically on the order of 1%. We demonstrate PirouNet's capabilities as it generates original choreography based on the "Laban Time Effort", an established dance notion describing intention for a movement's time dynamics. We extensively evaluate PirouNet's dance creations through a series of qualitative and quantitative metrics, validating its applicability as a tool for choreographers.
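
For readers curious how training can succeed with only ~1% of sequences labeled, semi-supervised conditional VAEs are commonly trained in the style of Kingma et al. (2014): labeled sequences contribute a conditional ELBO plus a classification term, while unlabeled sequences contribute an ELBO marginalized over all possible labels under the classifier's posterior. The sketch below illustrates that generic recipe with the model from the previous snippet; the function names, the alpha weighting, and the MSE reconstruction term are assumptions for illustration and may differ from the loss actually used in the paper.

import torch
import torch.nn.functional as F

def neg_elbo(model, x, y):
    """Per-sequence negative conditional ELBO (reconstruction MSE + KL to N(0, I))."""
    recon, mu, logvar = model(x, y)
    recon_loss = F.mse_loss(recon, x, reduction="none").sum(dim=(1, 2))
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)
    return recon_loss + kl

def semi_supervised_loss(model, classifier, x_lab, y_lab, x_unlab, alpha=100.0):
    # Labeled sequences: conditional ELBO plus a supervised classification term.
    loss_lab = neg_elbo(model, x_lab, y_lab).mean()
    loss_clf = F.cross_entropy(classifier(x_lab), y_lab.argmax(dim=-1))

    # Unlabeled sequences: marginalize the ELBO over every candidate label,
    # weighted by the classifier posterior, minus the posterior's entropy.
    probs = classifier(x_unlab).softmax(dim=-1)
    n_labels = probs.shape[-1]
    loss_unlab = 0.0
    for k in range(n_labels):
        y_k = F.one_hot(torch.full((x_unlab.shape[0],), k), n_labels).float()
        loss_unlab = loss_unlab + probs[:, k] * neg_elbo(model, x_unlab, y_k)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
    loss_unlab = (loss_unlab - entropy).mean()

    return loss_lab + alpha * loss_clf + loss_unlab

Because the unlabeled term only needs the classifier's predicted label distribution, the bulk of the dataset can remain unannotated while still shaping both the generator and the label classifier.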