
Interactive Augmented Reality for Dance

Taylor Brockhoeft¹, Jennifer Petuch², James Bach¹, Emil Djerekarov¹, Margareta Ackerman¹, Gary Tyson¹

Computer Science Department¹ and School of Dance², Florida State University

Tallahassee, FL 32306 USA
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

“Like the overlap in a Venn diagram, shared kinesthetic and intellectual constructs from the field of dance and the field of technology will reinforce and enhance one another, resulting in an ultimately deepened experience for both viewer and performer.” - Alyssa Schoeneman

Abstract

With the rise of the digital age, dancers and choreographers started looking for new ways to connect with younger audiences who were left disengaged from traditional dance productions. This led to the growing popularity of multimedia performances where digitally projected spaces appear to be influenced by dancers’ movements. Unfortunately, current approaches, such as reliance on pre-rendered videos, merely create the illusion of interaction with dancers, when in fact the dancers are actually closely synchronized with the multimedia display to create the illusion. This calls for unprecedented accuracy of movement and timing on the part of the dancers, which increases cost and rehearsal time, as well as greatly limits the dancers’ creative expression. We propose the first truly interactive solution for integrating digital spaces into dance performance: ViFlow. Our approach is simple, cost effective, and fully interactive in real time, allowing the dancers to retain full freedom of movement and creative expression. In addition, our system eliminates reliance on a technical expert. A movement-based language enables choreographers to directly interact with ViFlow, empowering them to independently create fully interactive, live augmented reality productions.

Introduction

Digital technology continues to impact a variety of seemingly disparate fields from the sciences to the humanities and arts. This is true of dance performance as well, as interactive technology incorporated into choreographic works is a prime point of access for younger audiences.

Due in no small part to the overwhelming impact of technology on younger generations, the artistic preferences of today’s youth differ radically from those raised without the prevalence of technology. This results in the decline of youth attending live dance performances (Tepper 2008). Randy Cohen, vice president for research and policy at Americans for the Arts, commented that: “People are not

Figure 1: An illustration of interactive augmented reality in a live dance performance using ViFlow. Captured during a recent performance, this image shows a dynamically generated visual effect of sand streams falling on the dancers. These streams of sand move in real time to follow the location of the performers, allowing the dancers to maintain freedom of movement. The system offers many other dynamic effects through its gear-free motion capture system.

walking away from the arts so much, but walking away from the traditional delivery mechanisms. A lot of what we’re seeing is people engaging in the arts differently.” (Cohen 2013). Given that younger viewers are less intrigued by traditional dance productions, dancers and choreographers are looking for ways to engage younger viewers without alienating their core audiences.

Through digital technology, dance thrives. Adding a multimedia component to a dance performance alleviates the need for supplementary explanations of the choreography. The inclusion of digital effects creates a more easily relatable experience for general audiences. Recently there has been an effort to integrate augmented reality into dance performance. The goal is to use projections that respond to the performers’ movement. For example, a performer raising her arms may trigger a projected explosion on the screen behind her. Or, the dancers may be followed by downwards streams of sand as they move across the stage (see Figure 1). However, current approaches to augmented reality in professional dance merely create the illusion of interaction. Furthermore, only a few choreographers today have the technological collaboration necessary to incorporate projection effects in the theater space.


(a) Tracking Mask (b) Tracking Identification (c) Performer with an effect behind her

Figure 2: The ViFlow system in action. Figure (a) shows the raw silhouette generated from tracking the IR reflection of the performer, (b) displays the calculated points within the silhouette identified as the dancer core, hands, and feet, and (c) depicts the use of these points when applied to effects for interactive performance in the dynamically generated backdrop image.

Florida State University is fortunate to have an established collaboration between a top-ranked School of Dance and Department of Computer Science in an environment supportive of interdisciplinary creative activities. Where these collaborative efforts have occurred, we have seen a new artistic form flourish. However, the vast majority of dance programs and companies lack access to the financial resources and technical expertise necessary to explore this new creative space. We believe that this access problem can be solved through the development of a new generation of low-cost, interactive video analysis and projection tools capable of providing choreographers direct access to the video layering that they desire to augment their dance compositions.

Augmented dance performances that utilize pre-rendered video projected behind performers on stage to create the illusion of interactivity have several notable drawbacks. The dancers must rehearse extensively to stay in sync with the video. This results in an increase in production time and cost, and makes it impractical to alter choreographic choices. Further, this approach restricts the range of motion available to dancers, as they must align with a precise location and timing. This not only sets limits on improvisation, but restricts the development of creative expression and movement invention of the dancer and choreographer. If a dancer even slightly misses a cue, the illusion is ineffective and distracting for the viewer.

A small number of dance companies (Wechsler, Weiß, and Dowling 2004; Bardainne and Mondot 2015) have started to integrate dynamic visual effects through solutions such as touch-screen technology (see the following section for details). However, moving away from static video into dynamically generated visualizations gives rise to a new set of challenges. Dynamic digital effects require a specialized skillset to set up and operate. The complex technical requirements of such systems often dictate that the visual content has to be produced by a separate team of technical developers in conjunction with performing artists. This requirement can lead to miscommunication, as the language incorporated into the lexicon of dancers differs significantly from that

employed by computer programmers and graphical designers. This disconnect can impair the overall quality of the performance, as artists may ask for too much or too little from technical experts because they are unfamiliar with the inner workings of the technology and its capabilities.

In this paper we introduce ViFlow (short for Visual Flow [1]), a new system that remedies these problems. Dancers, choreographers, and artists can use our system to create interactive augmented reality for live performances. In contrast with previous methods that provide the illusion of interactivity, ViFlow is truly interactive. With minimal low-cost hardware, just an infrared light emitter and an infrared-sensitive webcam, we can track multiple users’ motions on stage. The projected visual effects are then changed in real time in response to the dancers’ movements (see Figure 2 for an illustration). Further, by requiring no physical gear, our approach places no restriction on movements, interaction among dancers, or costume choices. In addition, our system is highly configurable, enabling it to be used in virtually any performance space.

With traditional systems, an artist’s vision must be translated to the system through a technical consultant. To eliminate the need for a technical expert, we have created a gesture-based language that allows performers to specify visualization behavior through movement. Visual content is edited on the fly in a fashion similar to that of a dance rehearsal using our internal gesture-based menu system and a simple movement-driven language. Using this movement-based language, an entire show’s visual choreography can be composed solely by an artist on stage without the need of an outside technical consultant. This solution expands the artist’s creative space by allowing the artist’s vision to be directly interpreted by the system without a technical expert.

ViFlow was first presented live at Florida State University’s Nancy Smith Fichter Theatre on February 19, 2016 as part of the Days of Dance performance series

[1] Flow is one of the main components of the dynamics of movement. In our system, it also refers to the smooth interaction between the dancer’s movements and the visual effects.


auditions. This collaborative piece with ViFlow was chosen to be shown in full production. Footage of the use of ViFlow by the performers of this piece can be found at https://www.youtube.com/watch?v=9zH-JwlrRMo.

Related Works

The dance industry has a rich history of utilizing multimedia to enhance performance. As new technology is developed, dancers have explored how to utilize it to enhance their artistic expression and movement invention. We will present a brief history of multimedia in dance performances, including previous systems for interactive performance, and discuss the application of interactive sets in related art forms. We will also present the most relevant prior work on the technology created for motion capture and discuss the limitations of its application to live dance performance.

History of Interactive Sets in Dance

Many artists in the dance industry have experimented with the juxtaposition of dance and multimedia. As early as the 1950s, the American choreographer Alwin Nikolais was well known for his dance pieces that incorporated hand-painted slides projected onto the dancers’ bodies on stage. Over the past decade, more multimedia choreographers in the dance industry have been experimenting with projections, particularly interactive projection. Choreographers Middendorp, Magliano, and Hanabusa used video projection and very well trained dancers to provide an interplay between dancer and projection. Lack of true interaction is still detectable to the audience, as precision of movement is difficult to sustain throughout complex pieces. This has the potential of turning the audience into judges focusing on the timing of a piece while missing some of the emotional impact developed through the choreography.

In the early 2000s, as technology was becoming more accessible, dance companies started collaborating with technical experts to produce interactive shows with computer generated imagery (CGI). Adrien M/Claire B used a physics particle simulation environment they developed, called eMotion [2], that resulted in effects that looked more fluid. This was achieved by employing offstage puppeteers with tablet-like input devices that they used to trace the movements of performers on stage and thus determine the location of the projected visual effects (Bardainne and Mondot 2015). Synchronization is still required, though the burden is eased, because dancers are no longer required to maintain synchronized movement. This duty now falls to the puppeteer.

Eyecon (Wechsler, Weiß, and Dowling 2004) is an infrared tracking-based system utilized in Obarzanek’s Mortal Engine. The projected effects create a convincing illusion of dancers appearing as bio-fiction creatures in an organic-like environment. However, Eyecon’s solution does not provide the ability to differentiate and individually track each performer. As a result, all performers must share the same effect. The system does not provide the ability for separate dancers to have separate on-screen interactions. Moreover, Eyecon can only be applied in very limited performance

[2] eMotion System: http://www.am-cb.net/emotion/

spaces. The software forces dancers to be very close to the stage walls or floor. This is because the tracking mechanism determines a dancer’s location by shining infrared light against a highly reflective surface, and then looking for dark spots or “shadows” created by the presence of the dancer. By contrast, we identify the reflections of infrared light directly from the dancers’ bodies, which allows us to reliably detect each dancer anywhere on the stage without imposing a limit on location, stage size, or number of dancers.

Studies have also been conducted to examine the interactions of people with virtual forms or robots. One such study, by Jacob and Magerko (2015), presents the VAI (Viewpoint Artificial Intelligence) installation, which aims to explore how well a performer can build a collaborative relationship with a virtual partner. VAI allows performers to watch a virtual dance partner react to their own movements. VAI’s virtual dancers move independently; however, their movements are reactive to the movement of the human performer. This enhances the relationship between the dancer and the performer because VAI appears to act intelligently.

Another study, by Corness, Seo, and Carlson (2015), utilized the Sphero robot as a dance partner. In this study, the Sphero robot was remotely controlled by a person in another room. Although the performer was aware of this, they had no interaction with the controller apart from dancing with the Sphero. In this case, the performer not only drives, but must also react to the independent choices made by the Sphero operator. Users reported feeling connected to the device, and often compared it to playing with a small child.

Interactivity in performance can even extend past the artist’s control and be given to the audience. For LAIT (Laboratory for Audience Interactive Technologies), audience members are able to download an application to their phones that allows them to directly impact and interact with the show (Toenjes and Reimer 2015). Audience members can then collectively engage in the performance, changing certain visualizations or triggering cues. It can be used to allow an audience member to click a button to signal recognition of a specific dance gesture, or to use aggregate accelerometer data of the entire audience to drive a particle system projected on a screen behind the performers.

Interactive Sets in Other Art Forms

Multimedia effects and visualizations are also being used with increasing frequency in the music industry. A number of large international music festivals, such as A State of Trance and Global Gathering, have emerged over the last fifteen years that rely heavily on musically driven visual and interactive content to augment the overall experience for the audience. A recent multimedia stage production for musician Armin Van Buuren makes use of motion sensors attached to the arm of the artist to detect movements, which in turn trigger a variety of visual effects. [3]

The use of technology with dance performance is not limited to live productions. Often, artists will produce dance films to show their piece. As an example, the piece

[3] Project by Stage Design firm 250K, Haute Technique, and Thalmic Labs Inc. https://www.myo.com/arminvanbuuren


Unnamed Sound-Sculpture, by Daniel Franke, used multiple Microsoft Kinect devices to perform a 3D scan of a dancer’s movements (Franke 2012). Subsequently, the collected data was used to create a computer generated version of the performer that could be manipulated by the amplitude of the accompanying music.

Motion Capture Approaches (Tracking)

Many traditional motion capture systems use multiple cameras with markers on the tracked objects. Such systems are often used by Hollywood film studios and professional game studios. These systems are very expensive and require a high level of technical expertise to operate. Cameras are arranged in multiple places around a subject to capture movement in 3D space. Each camera must be set up and configured for each new performance space, and the approach requires markers on the body, which restrict movement and interaction among dancers (Sharma et al. 2013).

Microsoft’s Kinect is a popular tool that does not require markers and is used for interactive artwork displays, gesture control, and motion capture. The Kinect is a 3D depth sensing camera, and user skeletal data and positioning are easily obtained in real time. However, the Kinect only has a working area of about 8x10 feet, resulting in a limited performance space, thus rendering it impractical for professional productions on a traditional proscenium stage, which is generally about 30x50 feet in size (Shingade and Ghotkar 2014).

Organic Motion capture [4] is another marker-less system that provides 3D motion capture. It uses multiple cameras to capture motion, but requires that the background environment from all angles be easily distinguishable from the performer, so that the system can accurately isolate the moving shapes and build a skeleton. Additionally, the dancers are confined to a small, encapsulated performance space.

Several researchers (Lee and Nevatia 2009; Peursum, Venkatesh, and West 2010; Caillette, Galata, and Howard 2008) have built systems using commercial cameras that rely heavily on statistical methods and machine learning models to predict the location of a person’s limbs during body movement. Due to the delay caused by such computations, these systems are too slow to react and cannot perform in real time (Shingade and Ghotkar 2014).

One of the most accurate forms of movement tracking is based on Inertial Measurement Units (IMUs) that measure orientation and acceleration of a given point in 3D space using electromagnetic sensors. Xsens [5] and Synertial [6] have pioneered the use of many IMUs for motion capture suits which are worn by performers and contain sensors along all major joints. The collected data from all sensors is used to construct an accurate digital three dimensional version of the performer’s body. Due to their complexity, cost, and high number of bodily attached sensors, IMU systems are not considered a viable technology for live performance.

[4] Organic Motion - http://www.organicmotion.com/
[5] Xsens IMU system - www.xsens.com
[6] Synertial - http://synertial.com/

Setup and System Design

ViFlow has been designed specifically for live performance with minimal constraints on the performers. The system is also easy to configure for different spaces. It can receive input from a variety of camera setups and is therefore conducive to placement in a wide spectrum of dance venues. Because the primary tracking system uses infrared (IR) light, ViFlow also accommodates conventional lighting setups ranging from very low light settings to fully illuminated outdoor venues.

Hardware and Physical Setup

ViFlow requires three hardware components: a camera modified to detect light in the infrared spectrum, infrared light emitters, and a computer running the ViFlow software. We utilize infrared light because it is invisible to the audience and results in a high contrast video feed that simplifies the process of isolating the performers from the rest of the environment, when compared to a regular RGB video feed. By flooding the performance space with infrared light, we can identify the location of each performer within the frame of the camera. At the same time, ViFlow does not process any of the light in the visible spectrum and thus is not influenced by stage lighting, digital effect projections, or colorful costumes.

Most video cameras have a filter over the image sensor that blocks infrared light and prevents overexposure of the sensor in traditional applications. For ViFlow, this filter is replaced with the magnetic disk material found in old floppy diskettes. This effectively blocks all visible light while allowing infrared light to pass through.

In order to provide sufficient infrared light coverage for an entire stage, professional light projectors are used in conjunction with a series of filters. The exact setup consists of Roscolux [7] gel filters - Yellow R15, Magenta R46, and Cyan R68 - layered to make a natural light filter, in conjunction with an assortment of 750-1000 watt LED stage projectors. See Figure 3 for an illustration.

The projector lights are placed around the perimeter of the stage inside the wings (see Figure 4). At least two lights should be positioned in front of the stage to provide illumination to the center stage area. This prevents forms from being lost while tracking in the event that one dancer is blocking light coming from the wings of the stage.

The camera can be placed anywhere to suit the needs of the performance. However, care must be taken to handle possible body occlusions (i.e., two dancers behind each other in the camera’s line of sight) when multiple performers are on stage. To alleviate this problem, the camera can be placed high over the front of the stage, angled downwards (see Figure 4).

ViFlow Software

The software developed for this project is split into two components: the tracking software and the rendering/effect creation software. The tracking software includes data collection, analysis, and transmission of positional data to the

[7] Roscolux is a brand of professional lighting gels.


Figure 3: Gels may be placed in any order on the gel extender. We used LED lighting, which runs much cooler than traditional incandescent lighting.

front end program, where it displays the effects for a performance. ViFlow makes use of OpenCV, a popular open source computer vision framework. ViFlow must be calibrated to the lighting for each stage setup. This profile can be saved and reused later. Once calibrated, ViFlow can get data on each performer’s silhouette and movement.
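
For illustration, the following is a minimal sketch of how an OpenCV-based tracker of this kind might isolate performer silhouettes from the infrared feed. It is not ViFlow’s actual code; the threshold-based calibration profile, its file format, and the function names are assumptions made only for the example.

# Minimal sketch of IR silhouette tracking with OpenCV (not ViFlow's actual code).
# Assumes the per-venue calibration boils down to a brightness threshold and a
# minimum blob area saved in a small JSON profile.
import cv2
import json

def load_profile(path="stage_profile.json"):
    """Load a saved lighting calibration (hypothetical format)."""
    with open(path) as f:
        return json.load(f)  # e.g. {"threshold": 200, "min_area": 1500}

def track_performers(frame_gray, profile):
    """Return one (centroid, contour) pair per performer-sized bright blob."""
    # Performers bathed in IR appear bright against the dark stage.
    _, mask = cv2.threshold(frame_gray, profile["threshold"], 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    performers = []
    for c in contours:
        if cv2.contourArea(c) < profile["min_area"]:
            continue  # ignore noise and small reflections
        m = cv2.moments(c)
        centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
        performers.append((centroid, c))
    return performers

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # IR-converted webcam
    profile = load_profile()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for centroid, _ in track_performers(gray, profile):
            cv2.circle(frame, centroid, 8, (0, 255, 0), -1)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) == 27:       # Esc to quit
            break

In practice the saved profile would hold whatever per-venue values the calibration step actually produces; a single brightness threshold and minimum blob area are used here only to keep the sketch self-contained.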

At present, there are certain limitations in the tracking capabilities of ViFlow. Since a traditional 2D camera is used, there is only a limited amount of depth data that can be derived. Because of the angled setup of the camera, we do obtain some depth data through interpolation on the y axis, but it lacks the fine granularity for detecting depth in small movements. Fortunately, performances do not rely on very fine gesture precision, and dancers naturally seem to employ exaggerated, far-reaching gestures designed to be clearly visible and distinguishable to larger audiences. In working with numerous dancers, we have found that this more theatrical movement seems to be instilled in them both on and offstage.
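
As a rough illustration of that interpolation, the sketch below maps the image y coordinate of a silhouette’s lowest point to an approximate upstage/downstage distance, assuming the front and back stage edges were marked during calibration. The function and calibration values are hypothetical, not part of the published system.

# Hedged sketch: estimating coarse stage depth from the silhouette's base y coordinate,
# given an overhead, front-of-stage camera angled downwards.
def estimate_depth(base_y, y_front_edge, y_back_edge, stage_depth_ft=30.0):
    """Linearly interpolate between the downstage and upstage edges of the stage.

    base_y       -- image y of the lowest point of the performer's silhouette (feet)
    y_front_edge -- image y of the downstage edge, recorded during calibration
    y_back_edge  -- image y of the upstage edge, recorded during calibration
    Returns an approximate distance (in feet) from the downstage edge.
    """
    span = y_front_edge - y_back_edge
    if span == 0:
        return 0.0
    t = (y_front_edge - base_y) / span          # 0.0 at the front edge, 1.0 at the back
    t = min(max(t, 0.0), 1.0)                   # clamp to the stage
    return t * stage_depth_ft

# Example: feet halfway between the calibrated edges land roughly mid-stage.
print(estimate_depth(base_y=400, y_front_edge=600, y_back_edge=200))  # -> 15.0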

Visual Effects

The front end uses Unity3D by Unity Technologies [8] for displaying the visual medium. Unity3D is a cross-platform game engine that connects the graphical aspects of developing a game to JavaScript or C# programming. Unity has customization tools to generate content and is extensible enough to support the tracker. The front end consists of five elements: a camera, a character model, an environment, visual effects, and an interactive menu using gesture control, which is discussed in more detail in following sections.

The camera object correlates to what the end-user will see in the environment, and the contents of the camera viewport

[8] Unity3D can be downloaded from https://unity3d.com

Figure 4: Positioning of the camera and lights in our installation at the Nancy Smith Fichter Dance Theatre at Florida State University’s School of Dance. Lights are arranged to provide frontal, side, and back illumination. Depending on the size of the space, additional lights may be needed for full coverage. (Lights are circled in the diagram.)

are projected onto the stage. The visual perspective is both 2D and 3D to support different styles of effects.

The character model belongs to a collection of objects representing each performer. Each object is a collection of two attached sphere colliders for hand representations and a body capsule collider, as seen in Figure 6. The colliders are part of the Unity engine, serve as the points of interaction, and trigger menus, environmental props, and interactive effects.

Environments consist of multiple objects, including walls, floors, and ceilings of various shapes and colors. Aesthetic considerations for these objects are applied per performance or scene, as in Figure 7. Most of our environmental textures consist of creative usage of colors, abstract art, and free art textures.

The effects are delivered in a variety of methods such as interactive objects, particle systems, and timed effects. Some objects are a combination of other effects designed to


(a) Tracking Output (b) Tracking Mask

Figure 5: Four figures being tracked with our tracking software. Each individual is bathed in infrared light, thus allowing us to easily segment their form from the background. This shot is from the camera angle depicted in Figure 4.

Figure 6: Character Model Object. The small orbs are the colliders for hand positions and the larger capsule is the body collider.

deliver a specific effect, such as an interactive object that will trigger a particle system explosion upon interaction with a performer.

The particle system delivers ambience and interactive effects like rain, fog, waterfalls, fire, shiny rainbow flares, or explosions. ViFlow’s effects provide a set of adjustable features such as color, intensity, or direction. The particle systems have been preconfigured as interactive effects, such as a sand waterfall that splashes off the performers as seen in Figure 1, or a wildfire trail that follows the performers in Figure 8.

Some effects involve environmental objects that the dancer can interact with. One effect is a symmetric wall of orbs that cover the lower portion of the 2D viewport. When touched by the performer’s Unity collider, these dots have preconfigured behaviors such as shrinking, floating up, or just spiraling away. The customizations supported for the performers allow them to place the effects in specific locations, change their colors, and adjust predefined effects.

Lastly, there are global effects that can be both environmentally aesthetic, such as sand storms and snow falls, or interactive, such as a large face that watches the dancer and responds based on their position. The face might smile when they are running and frown when they are not moving, or

Figure 7: This static environment is the lower part of an hourglass, used in a performance whose theme centers on time manipulation. The dancers in this piece interact with a sand waterfall flowing out of the hourglass.

Figure 8: Two Unity particle systems, one used as an interactive fire effect and the other as a triggered explosion.

turn left and right as the dancers are moving stage left orright.

Communication Gap Between Dancers and Technologists

Multimedia productions in the realm of performing arts are traditionally complex due to the high degree of collaboration and synchronization that is required between artists on stage and the dedicated technical team behind the scenes. Working in conjunction with a technical group necessitates a significant time investment for synchronization of multimedia content and dance choreography. Moreover, there are a number of problems that arise due to the vastly different backgrounds of artists and technicians in relation to linguistic expression. In order to address these communication difficulties, we developed a system which allows artists to directly control and configure digital effects, without the need for additional technical personnel, by utilizing a series of dance movements which collectively form a gesture-based movement language within ViFlow.

One of the main goals of our system is to enhance the expressive power of performing artists by blending two traditionally disjoint disciplines - dance choreography and computer vision. An important takeaway from this collaboration is the stark contrast and vast difference in the language, phrasing, and style of expression used by dancers and those with computing oriented backgrounds. The linguistic gap


between these two groups creates a variety of development challenges, such as system requirements misinterpretations and difficulties in creating agreed upon visual content.

To better understand the disparity between different people’s interpretations of various visual effects provided by our system, we asked several dancers and system developers to describe visual content in multimedia performances. The phrasing used to describe the effects and dancer interactions of the system was highly inconsistent, as well as a potential source of ambiguity and conflict during implementation.

Dancers and developers were separately shown a batch of video clips of dance performances that utilized pre-rendered visual effects. Each person was asked to describe the effect that was shown in the video. The goal was to see how the two different groups would describe the same artistic visual content, and moreover, to gain some insight into how well people with a non-artistic, technical background could interpret a visual effect description coming from an artist.

The collected responses exposed two major issues. First, the descriptions were inconsistent from person to person, and second, there was a significant linguistic gap between artists and people with a computing background. As an example, consider this description of a visual effect written by a dancer: “I see metallic needles, projected onto a dark surface behind a solo dancer. They begin subtly, as if only a reference, and as they intensify and grow in number we realize that they are the echoes of a moving body. They appear as breathing, rippling, paint strokes, reflecting motion”. A different dancer describes the same effect as “sunlight through palm fronds, becomes porcupine quills being ruffled by movement of dancer”. A system developer, on the other hand, described the same visual effect as “a series of small line segments resembling a vector field, synchronized to dance movements”. It is evident that the descriptions are drastically different.

This presents a major challenge, as typically a technician would have to translate artists’ descriptions into visual effects. Yet the descriptions provided by dancers leave a lot of room for personal interpretation, and lead to difficulties for artists and technicians when they need to reach agreement on how a visualization should look on screen. In order to address this critical linguistic problem, our system incorporates a dance-derived, gesture-based motion system that allows performers to parameterize effects directly by themselves while dancing, without having to go through a technician who would face interpretation difficulties. This allows dancers a new level of artistic freedom and independence, empowering them to fully incorporate interactive projections into their creative repertoire.

Front End User Interface and Gesture Control

Our interactive system strives to eliminate the need for a technician to serve as an interpreter, or middleman, between an artist’s original vision and the effects displayed during a performance. As discussed above, a number of linguistic problems make this traditional approach inefficient. We address this problem by implementing direct dance-based gesture control, which is used for user interactions with the system as well as customizing effects for a performance.

The system has two primary modes of operation: a show-time mode, which is used to run and display the computerized visual component of the choreographed performance during rehearsals or production, and an edit mode, which is used to customize effects and build the sequence of events for a performance. In other words, edit mode is used to build and prepare the final show-time product.

Edit mode implements our novel gesture-based approach for direct artist control of computer visualizations. It utilizes a dancer’s body language (using the camera input as previously described in the Setup and System Design section) to control the appearance of digital content in ViFlow.

Effects are controlled and parameterized by the body language and movements of the dancer. A number of parameters are controlled through different gestures. For example, when configuring a wildfire trail effect, shown in Figure 8, the flame trail is controlled by the movement speed of a dancer on stage, while the size of the flame is controlled via hand gestures showing expansion as the arms of a dancer move away from each other. In a different scenario, in which a column of sand is shown as a waterfall behind a dancer, arm movements from left to right and up and down are used to control the speed of the sand waterfall, as well as the direction of the flow. Depending on the selected effect, different dance movements control different parameters. Since all effects are designed for specific dance routines, this effectively creates a dance-derived movement-gesture language, which can be naturally and intuitively used by a dancer to create the exact visual effects desired.
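
As a hedged example of such a mapping, the sketch below converts tracked hand positions and movement speed into parameters for a wildfire-style effect. The parameter names, ranges, and scaling factors are illustrative assumptions, not ViFlow’s actual configuration.

# Hedged sketch of gesture-to-parameter mapping for an effect like the wildfire trail.
import math

def hand_spread(left_hand, right_hand):
    """Distance between the two tracked hand points, in pixels."""
    return math.dist(left_hand, right_hand)

def wildfire_parameters(left_hand, right_hand, core_speed,
                        min_spread=50.0, max_spread=400.0):
    """Map the dancer's pose and motion to effect parameters.

    Arms far apart -> larger flames; faster travel across the stage -> longer trail.
    """
    spread = hand_spread(left_hand, right_hand)
    # Normalize spread into [0, 1] over an assumed comfortable arm range.
    size = (spread - min_spread) / (max_spread - min_spread)
    size = min(max(size, 0.0), 1.0)
    return {
        "flame_size": size,               # hypothetical parameter consumed by the renderer
        "trail_length": core_speed * 0.5, # hypothetical scaling of movement speed
    }

# Example: arms spread wide (about 360 px apart) while moving quickly.
print(wildfire_parameters((120, 300), (480, 300), core_speed=8.0))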

When a dancer is satisfied with the visualization that has been created, it is saved and added to a queue of effects to be used later during the production. Each effect in the queue is supplied with a time at which it should be loaded. When a dancer is ready, this set of effects and timings is saved and can be used during the final performance in show-time mode.
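
A minimal sketch of how such a queue of timed effects could be represented and saved for show-time playback follows; the field names and JSON layout are assumptions made for illustration and are not taken from the ViFlow implementation.

# Hedged sketch of a saved effect queue for show-time mode.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class EffectCue:
    name: str                      # e.g. "sand_waterfall" or "wildfire_trail"
    load_time_s: float             # when (seconds from the start) to load the effect
    parameters: dict = field(default_factory=dict)  # values chosen in edit mode

@dataclass
class ShowQueue:
    cues: list

    def save(self, path):
        with open(path, "w") as f:
            json.dump([asdict(c) for c in self.cues], f, indent=2)

    @classmethod
    def load(cls, path):
        with open(path) as f:
            return cls([EffectCue(**c) for c in json.load(f)])

    def due(self, elapsed_s):
        """Cues whose load time has passed, in show order."""
        return [c for c in sorted(self.cues, key=lambda c: c.load_time_s)
                if c.load_time_s <= elapsed_s]

# Example: two cues composed in edit mode, written out for the performance.
queue = ShowQueue([
    EffectCue("sand_waterfall", 0.0, {"flow_speed": 0.7, "direction": "down"}),
    EffectCue("wildfire_trail", 95.0, {"flame_size": 0.4}),
])
queue.save("show_cues.json")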

Discussion: Creativity Across Domains

This interdisciplinary research project brought together two fields with different perspectives on what it means to be creative. In our joint work we learned to appreciate both the differences in how we approach the creative process and our goals for the final product.

From the perspective of dance and choreography, this project charts new territory. There is no precedent for allowing the choreographer this degree of freedom with interactive effects on a full scale stage, and very little in the way of similar work. This leaves the creative visionary with a world of possibilities with respect to choreographic choices, visual effects, and creative interpretation, all of which must be pieced together into a visually stunning performance. The challenge lies partly in searching this vast creative space and partly in the desire to incorporate creative self-expression, which plays a central role in the arts.

In sharp contrast, our computer science team was given the well-defined goal of creating interactive technology that would work well in the theater space. This greatly limited our search space and provided a clear method for evaluating our work: if the technology works, then we’re on the right


track. Our end goal can be defined as an “invention”, where the focus is on the usefulness of our product - though in order to be a research project it also had to be novel. Unlike the goals of choreography in our project, self-expression played no notable part for the computer science team.

Another intriguing difference is how we view the importance of the process versus the final product. Innovation in the realm of computing tends to be an iterative process, where an idea may start out as a research effort, with intermediate steps demonstrated with a proof-of-concept implementation. Emphasis is placed on the methodology behind the new device or software product.

On the other hand, most dance choreographers focus primarily on the end result without necessarily emphasizing the methodology behind it. At all phases of the creative process, choreographers evaluate new ideas with a strong emphasis on how the finished product will be perceived by the audience. In the technological realm, the concern for general audience acceptance is only factored in later in the process.

During the early stages of ViFlow development, one of the critiques coming from dance instructors after seeing a trial performance was that “the audience will never realize all that went into the preliminary development process,” and that the technique for rendering projections (i.e., pre-recorded vs. real-time with dancer movement tracking) is irrelevant to the final performance from an audience’s point of view. In a sense, a finished dance performance does not make it a point to market its technological components, as this is merely an aspect of backstage production. Technology related products, on the other hand, are in large part differentiated not only based on the end goal and functionality, but also on the methodology behind the solution.

Conclusions

ViFlow has been created to provide a platform for the production of digitally enhanced dance performance that is approachable to choreographers with limited technical background. This is achieved by moving the creation of visual projection effects from the computer keyboard to the performance stage in a manner more closely matching dance choreographic construction.

ViFlow integrates low-cost vision recognition hardware and video projection hardware with software developed at Florida State University. The prototype system has been successfully integrated into public performance pieces in the School of Dance and continues to be improved as new technology becomes available, and as we gain more experience with the ways in which choreographers choose to utilize the system.

The use of ViFlow empowers dancers to explore visualization techniques dynamically, at the same time and in the same manner as they explore dance technique and movement invention in the construction of a new performance. In doing so, ViFlow can significantly reduce production time and cost, while greatly enhancing the creative palette for the choreographer. We anticipate that this relationship will continue into the future and hope that ViFlow will be adopted by other university dance programs and professional dance companies. While production companies have been

the primary target for ViFlow development, we believe that the algorithms can be used in a system targeting individual dancers who would like to explore interactive visualizations at home.

References

[Bardainne and Mondot 2015] Bardainne, C., and Mondot, A. 2015. Searching for a digital performing art. In Imagine Math 3. Springer. 313–320.

[Caillette, Galata, and Howard 2008] Caillette, F.; Galata, A.; and Howard, T. 2008. Real-time 3-d human body tracking using learnt models of behaviour. Computer Vision and Image Understanding 109(2):112–125.

[Cohen 2013] Cohen, P. 2013. A new survey finds a drop in arts attendance. New York Times, September 26.

[Corness, Seo, and Carlson 2015] Corness, G.; Seo, J. H.; and Carlson, K. 2015. Perceiving physical media agents: Exploring intention in a robot dance partner.

[Franke 2012] Franke, D. 2012. Unnamed sound-sculpture. http://onformative.com/work/unnamed-soundsculpture. Accessed: 2016-02-29.

[Jacob and Magerko 2015] Jacob, M., and Magerko, B. 2015. Interaction-based authoring for scalable co-creative agents. In Proceedings of International Conference on Computational Creativity.

[Lee and Nevatia 2009] Lee, M. W., and Nevatia, R. 2009. Human pose tracking in monocular sequence using multi-level structured models. Pattern Analysis and Machine Intelligence, IEEE Transactions on 31(1):27–38.

[Peursum, Venkatesh, and West 2010] Peursum, P.; Venkatesh, S.; and West, G. 2010. A study on smoothing for particle-filtered 3d human body tracking. International Journal of Computer Vision 87(1-2):53–74.

[Sharma et al. 2013] Sharma, A.; Agarwal, M.; Sharma, A.; and Dhuria, P. 2013. Motion capture process, techniques and applications. Int. J. Recent Innov. Trends Comput. Commun 1:251–257.

[Shingade and Ghotkar 2014] Shingade, A., and Ghotkar, A. 2014. Animation of 3d human model using markerless motion capture applied to sports. arXiv preprint arXiv:1402.2363.

[Tepper 2008] Tepper, S. J. 2008. Engaging art: the next great transformation of America’s cultural life. Routledge.

[Toenjes and Reimer 2015] Toenjes, J. M., and Reimer, A. 2015. LAIT the laboratory for audience interactive technologies: Don’t turn it off, turn it on! In The 21st International Symposium on Electronic Art.

[Wechsler, Weiß, and Dowling 2004] Wechsler, R.; Weiß, F.; and Dowling, P. 2004. Eyecon: A motion sensing tool for creating interactive dance, music, and video projections. In Proceedings of the AISB 2004 COST287-ConGAS Symposium on Gesture Interfaces for Multimedia Systems, 74–79. Citeseer.
