
What if everyone can dance? 

What if choreography data could affect how people interact with and perceive their environments?

About this project

Since 2017, I have devoted a lot of time to dancing. Dance is not only an aggregation of body language, but also an abstract form of people's spatial relationships within environments. It consists of an individual's occupation of space and their spatial relationships with other individuals. By analyzing and training a data model of Latin dance choreography, I hope to apply these relationships to non-dancers' relationships with their environments and change the way they interact with others.

KEYWORDS:

Computational media

Machine learning

Interactive data visualization

Motion capture

Graduation project

Instructor : Fan Xiang

Date : Dec 2020 - Nov 2021 


Awards:

- Graduate 360°: 100 graduation works of the year

- Special Prize for Data Visualization, 2021 'Think Youth' Shanghai International Digital Creation, Innovation & Entrepreneurship Competition, New Media Art section

- 2021 Young Battle: nominated for best graduation project


Key Features

Latin dance: data source

Machine learning: analysis method

Interactive media: visualization


Background & Inspiration

Problems

In my dance experience, I have realized that dance performance is always a one-way communication process - from the dancer to the audience.

Audiences are not really involved in the art of dance themselves.


What to do

Based on the collection and analysis of dance motion data, I created an interactive dance system that puts non-dancer audiences in a predominant position when they are involved in the interaction.


Logic chain

I have built a logic chain that can be iterated throughout the whole creative process. In this logic chain, the data can be expanded and the model can be integrated.

logic chain

This project is still in progress; you can always see its latest progress on this page.

Data collection

In this session, I used optical motion capture to collect the dynamic position data of the GBD. The data covers three dance styles: rumba, samba, and cha-cha. This formed a dataset covering a large number of dance movements and body relationships, as well as the basic dynamic rhythms of GBL Latin dance.

 

I found that the essence of dance interaction is actually the collision and extrusion of individual spatial relationships. 
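Before any analysis, the captured frames need to be organized by style and paired between the two dancers. The sketch below is a hypothetical illustration of that step: the column names, the choice of hip joints, and all values are invented stand-ins, since a real optical-capture export contains far more joints at a much higher frame rate.

```python
import csv
import io

# Hypothetical slice of the mocap export: one row per frame, with the
# style label and the hip position of each dancer (illustrative values).
RAW = """style,frame,leader_hip_x,leader_hip_y,follower_hip_x,follower_hip_y
rumba,0,0.00,0.90,1.20,0.90
rumba,1,0.05,0.91,1.15,0.90
samba,0,0.00,0.95,1.30,0.95
"""

def load_frames(text):
    """Group the export by dance style into lists of
    (leader position, follower position) pairs - the raw material for
    analysing the spatial relationship between the two dancers."""
    frames = {}
    for row in csv.DictReader(io.StringIO(text)):
        leader = (float(row["leader_hip_x"]), float(row["leader_hip_y"]))
        follower = (float(row["follower_hip_x"]), float(row["follower_hip_y"]))
        frames.setdefault(row["style"], []).append((leader, follower))
    return frames
```

With pairs grouped per style like this, the distance between leader and follower in each frame directly measures the "collision and extrusion" of their shared space.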

Data collection

I built stage-like spaces for the interactive subjects. They do not carry a specific narrative meaning, but they create a visual atmosphere with a sense of the everyday.

Watch the video here


Data analysis

I wanted the user to be the "dancer" in the interaction, so the media should give the user real-time movement feedback, like a virtual dance partner. For example, when the user walks forward and raises their right hand, the virtual dance partner reacts in a relevant way.

Since a non-dancer user's movements are random, I could not use standard dance movements as judgment nodes; instead, I had to break down each piece of the user's body-movement data for analysis. I therefore used machine learning to train on the dataset. Through training, the model can roughly capture the relationship between the dancers' spatial positions and body movements.
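The trained model itself is not shown on this page, but its role can be illustrated with a much simpler stand-in: given recorded pairs of leader and follower poses, look up the recorded leader pose nearest to the user's current pose and return the matching follower pose. Everything below - the flattened 4-value poses and their coordinates - is invented for the sketch, not real capture data.

```python
import math

# Hypothetical (leader pose, follower pose) pairs; a pose is a flat
# list of (x, y) joint coordinates. Real data has many more joints.
DANCE_PAIRS = [
    ([0.0, 0.0, 0.1, 1.0], [1.0, 0.0, 0.9, 1.0]),
    ([0.5, 0.0, 0.6, 1.2], [1.5, 0.0, 1.4, 1.2]),
    ([1.0, 0.0, 1.2, 0.8], [2.0, 0.0, 1.8, 0.8]),
]

def pose_distance(a, b):
    """Euclidean distance between two flattened poses."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def partner_response(user_pose):
    """Stand-in for the trained model: return the follower pose whose
    paired leader pose is closest to the user's current pose."""
    _, follower = min(DANCE_PAIRS,
                      key=lambda pair: pose_distance(pair[0], user_pose))
    return follower
```

Because the lookup works on raw joint coordinates rather than named dance steps, it tolerates the random, non-standard movements of a non-dancer - the same property the trained model needs.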


mean training


Feedback data (main body spatial data) - sent to the visualization part


Visualization

Through researching reference projects, I came up with an idea: does the visualization of dance data have to be limited to a human-figure representation? Driven by this thought, I tried to bind the body points under the trained model to different visual objects.

By detaching from the "human" figure, non-dancer users can interact and perform more freely, instead of being limited by the visual aesthetics of the "human body"; in this process, I broaden the concept of "dance" itself.

User interaction

Users' body-movement data collected by the Kinect

FEEDBACK

Trained model of dance interaction
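One tick of this feedback loop can be sketched as follows. The helpers are hypothetical stand-ins: `partner_model` here simply mirrors the user's joints (the real response comes from the trained choreography model), and `bind_to_objects` does the joint-to-object binding described above.

```python
def partner_model(pose):
    """Placeholder for the trained model: mirror the user's joints
    horizontally, roughly as a partner facing the user might move."""
    return {joint: (-x, y) for joint, (x, y) in pose.items()}

def bind_to_objects(pose, obj="chair"):
    """Bind each tracked joint to a visual object, so the render layer
    moves objects instead of drawing a human figure."""
    return {joint: {"object": obj, "position": xy}
            for joint, xy in pose.items()}

def frame_update(user_pose):
    """One tick of the loop: user pose -> model response -> two sets
    of scene objects (the user's 'chair' and the virtual partner's)."""
    response = partner_model(user_pose)
    return bind_to_objects(user_pose), bind_to_objects(response)
```

Swapping `obj` is all it takes to move between iterations (abstract graphics, flowers, chairs) without touching the capture or model layers.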


Iteration - 1


Weakness: the flower image cannot show specific and vivid movements (low adjustability), and it lacks symbolic meaning.

computer graphic

Iteration - 2

figurative - flowers

Iteration - 3

figurative - chairs

I turned my attention to things in daily life, trying to find an object with a suitable symbolic meaning and a high degree of movability.

 

Symbolic meaning: I think chairs are quite similar to humans, and they give people a sense of humor. Their usual visual impression is normal, static, and straight. The further an object is from the impression dance gives people, the more subtle the contrast it creates when it dances. The design tells the audience: you are also an ordinary "chair" that can't dance, but this work proves that you can dance too.


INSPIRATION: PIXAR

In this section, I arrived at a thought:

Without "human figures", we can rebuild the concept of dance and broaden it so it can be performed by more individuals - those who are not dancers. Blurring the boundary of a simple behavior through interaction is a path to empowering people to do more.

Data analysis
Visualization

Interactive media

Kinect - user movement detection

I used the Kinect to detect users' body movements in real time, made those data interact with the media (the virtual dance partner), and made each user one of the "chairs" in the media.
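Feeding Kinect data into the media requires mapping its camera-space joints (meters, origin at the sensor) into the scene's 2D coordinates. The sketch below is a simplified, hypothetical version of that mapping; the scene size and visible ranges are illustrative, not the project's calibration.

```python
def to_scene(joint_xyz, scene_w=1920, scene_h=1080,
             x_range=2.0, y_range=1.5):
    """Map a Kinect camera-space joint (meters, origin at the sensor)
    into 2D scene pixels. Depth (z) is ignored in this simplified
    mapping; the y-axis is flipped because screen y grows downward."""
    x, y, z = joint_xyz
    sx = (x / x_range + 0.5) * scene_w
    sy = (0.5 - y / y_range) * scene_h
    return sx, sy
```

A joint straight in front of the sensor lands at the center of the scene, so the user's "chair" appears where they intuitively expect themselves to be.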


user figure

User test

I found that non-dancer users may be confused when they first engage in the interaction: they spent time figuring out where and what they are in the media, and what the feedback is.


I use "light" in the scene to guide users' attention and help them quickly understand what is happening. This "light" is simply a spotlight on the performance stage, and it simultaneously creates a fluid, comfortable interaction atmosphere.
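A guidance light that snaps instantly to the body feels jittery; easing it a fraction of the way toward the user each frame keeps the atmosphere fluid. The function and smoothing factor below are an illustrative sketch, not the project's actual implementation.

```python
def follow(light_pos, user_pos, smoothing=0.2):
    """Move the spotlight a fixed fraction of the remaining distance
    toward the user each frame (exponential easing), so the light
    trails the body smoothly instead of jumping."""
    lx, ly = light_pos
    ux, uy = user_pos
    return (lx + (ux - lx) * smoothing,
            ly + (uy - ly) * smoothing)
```

Calling this once per frame makes the light close about 20% of the gap each tick, converging quickly when the user stops moving.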

Interaction iteration

This project has an open-ended result: with the given logic chain of data, visuals, and the interaction iteration system above, further outputs can be generated and verified.

Output 2 (in progress)

The second output is a dynamic environment driven by the responsive data model of choreography. The investigation of choreography is not only about body movements, but also about space occupation. When we move, we squeeze the space and form negative space between us. Conversely, controlling the negative space can direct people's spatial relationships.


A group of fluctuating spaces formed based on the choreography

Concept

Using the trained choreography model, a negative space can be formed within a responsive dynamic space. When subjects in the space change their movements, the environment responds by reforming the surrounding fluctuations and directing the subjects to build new spatial relationships with others. The movements are detected by both Kinect and gravity devices.


Up fluctuation to break up space

Down fluctuation to form body space

Wires to control fluctuation

Movements and spatial occupations are detected by Kinects and gravity devices
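The up/down behavior above can be expressed as a toy height field: dip near each body (forming body space) and rise between bodies (breaking up the shared space). The radii and heights below are illustrative assumptions, not measured values from the installation.

```python
import math

def fluctuation_height(point, subjects, body_radius=0.6,
                       max_up=1.0, max_down=-0.5):
    """Toy height field for the fluctuating surface: hollow out the
    area around each body, and rise toward the shared centre of the
    group to push subjects apart. All parameters are illustrative."""
    d_min = min(math.dist(point, s) for s in subjects)
    if d_min < body_radius:
        return max_down  # down fluctuation: form space around a body
    centroid = (sum(s[0] for s in subjects) / len(subjects),
                sum(s[1] for s in subjects) / len(subjects))
    # up fluctuation: highest at the shared centre, fading outward
    return max_up / (1.0 + math.dist(point, centroid))
```

Sampling this field on a grid and feeding the heights to the wire-controlled actuators would reproduce the diagrammed behavior: valleys around bodies, a ridge in the negative space between them.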

Responses

Interaction


I hereby express my thanks to

Jiabei Zhao & Lyric Zhao (programming advisors),

Yiwhen Zhang & Xianqi Su (dancers),

and Xiaohan Zhang & Xi Yang (testers).
