


Auditory feedback through continuous control of crumpling sound synthesis

Roberto Bresin
Department of Speech, Music and Hearing
KTH, Stockholm, Sweden
[email protected]

Stefano Delle Monache, Federico Fontana, Stefano Papetti, Pietro Polotti
Dipartimento di Informatica
University of Verona, Italy
{SURNAME}@sci.univr.it

Yon Visell
Centre for Intelligent Machines and CIRMMT
McGill University, Montreal, Canada
[email protected]

ABSTRACT
A realtime model for the synthesis of crumpling sounds is presented. By capturing the statistics of short sonic transients which give rise to crackling noise, it allows for a consistent description of a broad spectrum of audible physical processes which emerge in several everyday interaction contexts. The model drives a nonlinear impactor that sonifies every transient, and it can be parameterized depending on the physical attributes of the crumpling material. Three different scenarios are described, respectively simulating the foot interaction with aggregate ground materials, augmenting a dining scenario, and affecting the emotional content of a footstep sequence. Taken altogether, they emphasize the potential generalizability of the model to situations in which a precise control of auditory feedback can significantly increase the enactivity and ecological validity of an interface.

Author Keywords
Interaction design, auditory feedback, walking interfaces, crumpling sound.

ACM Classification Keywords
H.5.5 Information Interfaces and Presentation: Sound and Music Computing—Modeling

INTRODUCTION
It is a common everyday experience to hear sounds that, despite conveying information about a well-defined event, are formed by a sequence of short sonic transients. Despite a strong integration of such sounds at the cognitive level, they are clearly perceived to be composed of shorter, semantically indistinguishable units which, taken together, concur to define the event. It is the ecological experience of the physical world that enables listeners to attach individual labels to the composite sounds. We will refer to them as crumpling sounds.

Crumpling sounds are generated by a broad spectrum of physical phenomena, many of which manifest themselves in everyday contexts: a fire burning, a cob of corn kernels exposed to the heat of a flame, a piece of paper being crunched, a can being crushed, a moka pot that spills coffee out of the kettle. All these phenomena share a common temporal process originating with the transition toward a minimum-energy configuration of an ensemble of microscopic systems, by way of a set of dynamic events whose energy and transition time depend on the individual characteristics of each system and the amount of power it absorbs while changing configuration. Such a process cannot be appreciated at the microscopic level by our auditory system. Conversely, listeners can discriminate well whether a fire is dying down or flaring up, whether popcorn is ready or needs more heat to cook well, and whether a piece of paper or a can has been crunched gently or, instead, by means of a strong action. It has been demonstrated that humans are very skilled in perceptually grouping ecological sounds by their temporal pattern, as in the perceptual distinction between bouncing and breaking sounds [22]. Crumpling sounds are informative of the macroscopic state of a composite system; furthermore, they disclose temporal dynamic aspects which depend on the macroscopic properties of the system, as well as on the overall energy that is being dissipated. Interestingly, physics provides an exact formalization of particle systems, which share similarities with systems producing crumpling sounds concerning their damping and acoustic transmission properties [19].

The idea of sonifying sequences of short transients to provide information about the event causing them initially resulted in feed-forward paradigms [11]. The audification of earthquakes and underground explosions through the auditory display of seismic signals was probably the first attempt to infer qualitative information about physical phenomena by means of sound [8]. Such signals produce crumpling sounds once they are transposed onto an audible time scale [21]. More recently, the synthesis of such sounds has taken on a role in the realm of enactive interfaces [15, 1], thanks to the development of physical model-based algorithms for the parametric synthesis of so-called "particle" sounds. Such techniques have been used to control and inform the generation in real time of hand clapping and walking sounds [5, 17]. On the side of artistic production, the use of such sounds dates back to the use of ecologically-motivated granular synthesis (for an extensive review see [18]), and has inspired the development of physically-informed virtual musical instruments, such as maracas [4].

Figure 1. A patch for Max/MSP from the Sound Design Toolbox, implementing the crumpling model.

The research presented here addresses the problem of designing a real-time parametric sound generation model that is applicable to diverse multimodal interaction scenarios where crumpling can augment the informational value of the auditory feedback. The generality of the approach is made possible by two assumptions that strengthen the practicality of its use for rendering the perceived acoustical properties of a wide range of ecological as well as non-ecological phenomena: i) the intensity E of the short transients that are produced by phenomena giving rise to crackling noise follows the known stochastic power law, whose probability p(E) ∝ E^γ characterizes different types of noise depending on the value taken by γ (for instance, in the case of crumpling paper −1.6 < γ < −1.3) [21]; ii) the temporal sequences of transients can be modeled by stationary Poisson processes provided their arrival times are assumed not to depend on each other [16]. Besides their mathematical foundation, whose understanding requires some facility with physics and probability theory, both assumptions are quite general and translate into modeling constraints that can be easily determined. Indeed, power laws as well as Poisson distributions with independent arrival times are involved in many fundamental processes in physics [20]. With some speculation, the generality of such processes allows one to think of crumpling in phenomenological terms just as with other ecological sounds, whose perception is inherently anchored to the type of sound source they evoke [9].
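As a concrete illustration of the two assumptions, the following Python sketch (not the SDT implementation; all parameter values are hypothetical) draws transient intensities from p(E) ∝ E^γ by inverse-transform sampling over a bounded energy range, and arrival times from a stationary Poisson process via independent exponential inter-arrival times:

```python
import random

def sample_transient_energy(gamma=-1.5, e_min=1e-4, e_max=1.0):
    """Draw one transient intensity E from p(E) proportional to E^gamma
    on [e_min, e_max], via inverse-transform sampling (valid for gamma != -1)."""
    u = random.random()
    a = gamma + 1.0
    return (e_min**a + u * (e_max**a - e_min**a)) ** (1.0 / a)

def sample_arrival_times(rate=50.0, duration=1.0):
    """Stationary Poisson process: inter-arrival times are independent
    exponential variates with the given rate (events per second)."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t >= duration:
            return times
        times.append(t)

# One second of simulated crumpling: a (time, energy) pair per micro-transient.
events = [(t, sample_transient_energy()) for t in sample_arrival_times()]
```

Each (time, energy) pair would then trigger one nonlinear impact, as described in the model below.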

Even if the two assumptions above determine the dynamics of a process, the sound generation model under design is still left with a crucial degree of freedom, given by the specification of transient characteristics. This specification in fact provides the model with information on the physical attributes of the system in which the process takes place. In this paper we will show that by adopting a general form of nonlinear impactor to prototype the transient, the model can accommodate different interaction contexts via a proper tuning of the impacting parameters [2].

After a short introduction to the model, we present three example applications involving high-level control of crumpling sounds for the simulation of different everyday sonic experiences. While a rigorous experimental validation of the interaction contexts presented in this work has yet to be made, the judgments that have been obtained from users during informal testing of the ecological validity, acceptability, and engagement of such contexts are promising. In this sense, the prototype applications described in what follows address a key passage toward a design process for multimodal interfaces displaying crumpling sounds, and they foreshadow a significant role for such sounds as components of closed-loop interaction paradigms providing contextual auditory feedback through their continuous control.

CONTINUOUS CONTROL OF CRUMPLING SOUNDS
This research stems from previous work [7], in which a preliminary model for the synthesis of crumpling sounds was proposed, tailored to the rendering of can crushing and walking, or running, events. It relied on a fairly sophisticated control strategy for the distribution of an initial amount of (potential) energy across the temporal process, and operated i) at the macroscopic level, by determining the arrival time and intensity of the crumpling process respectively using power law and Poisson statistics, as well as ii) at the microscopic level, by driving the physical parameters of mass, elasticity and dissipation of both the impacting and the impacted bodies [2].

That model did not allow the evolution of an event to be controlled continuously. The limitations of such a control strategy become clear as soon as we consider closed-loop interactions, whose feedback cannot merely be triggered by an event when it starts, but conversely must be informed by every instantaneous change concerning the evolution of the event. Recalling the examples mentioned above, consider the continuously changing force of a hand that crushes a sheet of paper, or a walking task that takes place on floors comprising several types of ground materials in such a way that different surfaces interact with the foot during a single step. Motivated by such ideas, we have designed a more flexible sound synthesis model, which still relies on the same macroscopic rules (i.e., transients ruled by a power law plus Poisson arrival times) and microscopic prototype (i.e., nonlinear impact), while being continuously controllable.

In the new model, synthesis of crumpling sounds is driven by two parameters: force, which sets the instantaneous power that enters the overall system, and resistance, which determines how resistant the microscopic systems are, on average, to the applied force before their state changes. The higher the resistance, the smaller the average number of crumpling events and, consequently, the stronger their average intensity for a given exerted force. Force and resistance are respectively mapped onto the power law γ parameter and the density of the stochastic temporal process via simple algebraic relationships. Concerning the physical attributes of the system, parameters of center frequency, decay time, stiffness, and contacting surface of the impacting bodies are exposed to control. These have been demonstrated to play a significant perceptual role in the auditory discrimination of colliding bodies [10].
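The exact algebraic relationships are not given here; the following Python sketch shows one plausible, purely hypothetical mapping that respects the qualitative behavior described above: event density grows with force and shrinks with resistance, while γ stays within the crumpling-paper range reported in [21].

```python
def crumpling_control(force, resistance,
                      gamma_min=-1.6, gamma_max=-1.3, base_density=200.0):
    """Hypothetical mapping from the two user-facing controls to the
    stochastic parameters of the model; not the actual SDT formulas.

    force, resistance: normalized controls in (0, 1].
    Returns (gamma, density): power-law exponent and Poisson event density.
    """
    # Stronger force skews the exponent toward gamma_max,
    # i.e. relatively more high-intensity transients.
    gamma = gamma_min + force * (gamma_max - gamma_min)
    # More force -> more events per second; more resistance -> fewer,
    # stronger events, matching the qualitative description in the text.
    density = base_density * force / resistance
    return gamma, density
```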

Based on these design specifications, two external objects have been included in the Sound Design Tools (SDT) library,¹ synthesizing respectively the crumpling events and the interaction between colliding bodies. These externals have been realized in C++ upon the flext library², to ensure cross-compilation under Windows and MacOS and compatibility with the Max/MSP³ real-time environment as well as the PureData open platform⁴. The example patch, shown in Figure 1, refers to the contcrump control external, which computes the intensity and arrival time of every transient, and to the impact~ abstraction, which contains the external computing the collisions. Furthermore, the patch exposes the controls that drive the sound synthesis.

CRUMPLING UNDERFOOT: THE ECOTILE PROJECT
A floor component called the EcoTile has been created with the aim of supplying interactive audio-haptic simulations of ecological ground materials in walking, based on the crumpling model. Walking is an enactive activity which is continuously negotiated in direct contact with its surroundings, dependent on the mutual morphology and material properties of both. Informative ground materials for walkers have long played a role in the design of urban environments. Passive haptic indicators are commonly used to signify important locations including stairways, crosswalks or subway platforms, and outdoor paths are designed to be readily distinguished by their material properties. However, the possibility of conveying these kinds of information through active devices embedded in real spaces in which people walk remains largely unexplored. Devices that have been developed for interactive simulation in walking have typically been confined to laboratories or other closed environments.

The EcoTile project aims at providing dynamic control over the ecological information that is available in walking, allowing users to interactively shape the perceived auditory and haptic material qualities of the ground. The project aims to realize an architectural interface for enaction in walking, as opposed to a wearable interface, such as a shoe. One advantage is that an interactive tile can be utilized for the design of diverse spaces, without requiring users to don special equipment to experience it.

¹Sound Design Tools. Publicly available for testing and integration in sound design projects [3].
²http://grrrr.org/ext/flext/
³www.cycling74.com
⁴www.puredata.org

Several research groups have developed instrumented floors for the sensing of walking, dancing, or running movements. For an overview, see chapter 2 of the book by Wanderley and Miranda [13]. The PhOLIEMat developed by Cook [5] is somewhat unique as a floor-based device for the control of synthesized walking sounds via the feet (evoking the work of the Foley artist in film). To our knowledge no project has yet addressed the design of a floor for interactive simulation of surface material properties in walking, whether through the auditory or haptic channel.

The prototype EcoTile (Figure 2) was designed to explore the capability of an actuated but otherwise rigid interactive platform to interactively simulate and convey the experience of walking on various materials. This aim was partly motivated by the notion that the auditory information alone is capable of conveying considerable information about the properties of surfaces that are walked upon. As a result, an even modestly successful haptic stimulus that is correlated with such an auditory signal may be enough for a convincing percept.

Figure 2. The interactive EcoTile with subject walking on it (top and bottom), next to the software component running on a laptop computer. The display of the latter shows the force, event density, and a sonogram of the resulting audio/haptic signal.

In the prototype, the user walks on or otherwise interacts via the feet with the tile, which interactively delivers a signal designed to mimic the sensation provided by a given surface type. The stimulus consists of a vibration transmitted to the foot and simultaneously an acoustic signal transmitted to the ear, but originating near the platform. For this prototype, a snowy surface was selected as phenomenologically interesting and as a material that might be readily identified by the user through interaction⁵. The device is pictured in the images of Figure 2. The physical tile is a 34 cm by 34 cm vertically actuated platform. It uses a D-Box motion control actuator and four Interlink force sensing resistors, with accompanying electronics. The simulation is run on a single computer in the software environment Max/MSP. Sound is reproduced by a powered loudspeaker located at the base of the tile.

Figure 3. Diagram of the footstep instrument, with force mapping (orange), SDT crumpling model (blue), and finite energy map (yellow). [Diagram blocks: Sensors in Tile, Force f(t), Control Mapping, Threshold, Control Parameter vf(t), Step Envelope Follower, Crumple Controller, Impact Synthesis Model, Event/Velocity, Modes Freq./Decay, Finite-Energy Map, Low-pass filter, Haptic Signal to Haptic Actuator, Sound Signal to Loudspeaker.]

Interactive auditory and haptic stimuli in EcoTile are generated continuously in response to users' footsteps. They are synthesized in real time by a simulation of crumpling snow underfoot, in a way that is driven by the force data from the footstep. An overview of the approach is presented in Figure 3. The method utilizes the crumpling and impact model components of the SDT described above. In order to simulate the decay of the initial energy provided by the footstep, the modal frequencies and decay times were controlled based on the finite energy model described in [7], with the energy level controlled by a simple two-state footstep envelope follower. Parameters of the synthesis model were tuned by hand to approximate walking on moderately well-packed snow.
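A two-state footstep envelope follower of the kind just mentioned can be sketched as follows (a hypothetical Python reconstruction; the threshold, initial energy, and decay constant are illustrative values, not those of the EcoTile implementation):

```python
def make_step_energy_follower(threshold=0.1, initial_energy=1.0, decay=0.995):
    """Hypothetical two-state follower: a rising force crossing the threshold
    signals a new footstep and recharges a finite energy reservoir, which then
    decays exponentially and can scale modal decay times and frequencies."""
    state = {"in_step": False, "energy": 0.0}

    def process(force_sample):
        if force_sample > threshold and not state["in_step"]:
            state["in_step"] = True       # contact detected: recharge energy
            state["energy"] = initial_energy
        elif force_sample <= threshold:
            state["in_step"] = False      # foot lifted: arm for the next step
        state["energy"] *= decay          # finite energy dissipates per sample
        return state["energy"]

    return process
```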

The net force signal f(t) provided by the EcoTile sensors was not utilized directly; instead, a control parameter vf was mapped onto the crumpling model's density parameter, via:

vf = |df/dt|  if df/dt < 0,   vf = 0  otherwise.   (1)

This choice represents a practical simplification in which crumpling occurs during walking only as weight is being transferred from the foot onto the tile, ignoring sensations produced as weight merely shifts from one part of the sole to another. The variation in force on the tile generated by weight shifts nonetheless contributes a qualitatively appropriate crumpling, due to the fact that there is some modulation of the overall force on the tile as shifting occurs.
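A discrete-time reading of Eq. (1) can be sketched as follows (Python; the backward finite-difference approximation of df/dt and the sample period are assumptions for illustration, not details given here):

```python
def make_vf_mapper(sample_period=0.01):
    """Eq. (1) in discrete time: vf = |df/dt| when the force derivative is
    negative, else 0. The derivative is a backward difference over one
    (hypothetical) sample period."""
    prev = {"f": None}

    def vf(f_sample):
        if prev["f"] is None:
            prev["f"] = f_sample
            return 0.0                    # no derivative on the first sample
        dfdt = (f_sample - prev["f"]) / sample_period
        prev["f"] = f_sample
        return abs(dfdt) if dfdt < 0 else 0.0

    return vf
```

The returned value would drive the crumpling model's density parameter, so events fire only while the force signal is falling.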

In the prototype, auditory and haptic stimuli are derived from the same instance of the crumpling model. Due to actuator limitations, the haptic signal lacks frequency components above 100 Hz, whereas the human haptic channel is sensitive to frequencies up to thousands of Hz. Based on feedback from users, the auditory signal with the limited haptic signal appeared sufficient to convey the impression of snow. Ten users were asked to explore interaction with the device by stepping onto it, in stride or in isolation, with one or both feet. Several expressed surprise at the level of veridicality of the experience of the virtual snow. We felt this could be attributed to the synchronization between feedback channels, to the realism of the sound signal, and to the unexpected nature of interacting with virtual snow in a laboratory.

⁵Audiovisual documentation can be viewed at http://cim.mcgill.ca/~yon/HS-Video; naturally, it does not capture the level of haptic interactivity.

Clearly a more systematic assessment is needed. While the device was designed to aid the fusion of haptic and auditory signals into a single multisensory percept, the level of psychophysical consideration that has gone into the design has so far been limited. Nonetheless, the degree of fusion achieved seems remarkable. As noted above, the synthesis parameters were tuned by hand to approximate well-packed snow. This method is unsatisfactory to the extent that it requires an intuitive exploration of the large parameter space.

A larger and more modular floor is being designed and built, based on a lower-cost, wider-bandwidth vibrotactile actuator. In tandem with the design revision, a set of experiments is being undertaken by colleagues in psychology, to assess the salience of each modality, and of the features of the corresponding signals, in contributing to the perception of material properties in walking. In addition, synchronized force, acoustic, and vibrotactile recordings are being collected from walking on diverse surfaces. These data will be used to refine the control parameter mapping used with the synthesis model, and may allow the tile application to feed information back to the synthesis algorithm design.

CRUMPLING SOUNDS IN A DINING SCENARIO
In the context of the EU CLOSED project, a particular interest is presently devoted to sonic interaction scenarios related to the kitchen and dining [3]. These scenarios have an obvious strong semantic value and provide a convenient setting in which to evaluate functional, aesthetic and emotional aspects of sound in the spirit of the CLOSED project. Recently a specific workbench has been developed and tested, namely a dining table named Gamelunch [14]. The Gamelunch (the name is a fusion of the Balinese "Gamelan" orchestra and the word "lunch") is the place where a set of experiences have been realized and will be tested, aiming at evaluating the role of the auditory aspects in the dining context (Figure 4). No formal and rigorous evaluation experiment has been performed yet.

Nevertheless, the creation of a virtual (continuous) sonic feedback, augmenting actions such as cutting a watermelon or stirring a soup, established the intuitive relevance of the auditory feedback at the phenomenological level. Some ecological parameters were uncovered, while a coherence between the action and the sonic feedback was maintained.


Figure 4. The Gamelunch.

As an example, the action of stirring a bowl of soup can provide a coherent feedback in the sense that the sound follows the dynamics of the movement of the spoon in the liquid. At the same time, the sound can be designed to convey the notion of a solid, sandy material instead of a liquid one, while maintaining a correspondence between the embodied action and a coherent virtual sonic response, even as the auditory feedback contradicts the haptic and visual feedback. Through such a contradiction, the unitary multisensory perception of the action is broken, and the auditory feedback is instead isolated and placed in perspective. This seems an effective way to induce the user to think about and realize the importance of the sonic aspects in interaction with everyday artefacts.

In this spirit, several such sonic interaction examples were implemented. In one case, friction models for rubbing and squeaking sounds were grafted onto cutting and lifting actions in the interaction with fork and knife, respectively. The continuous sound feedback clearly affected both the sensation of the consistency of the food (a watermelon) and the sensation of the weight of the cutlery [14].

The simple (non-continuous) crumpling model has appeared to be extremely effective for a variety of cases involving powder-like sounds (the sandy soup) or tearing sounds (the sticky fruit). However, the limitations noted above concerning the original, discretely-triggered version of the crumpling sound are apparent. The new continuous crumpling model offers a more powerful tool and new opportunities to expand and test interactive cases in which the auditory component is more tightly bound to the action, in the sense of a closed-loop and continuous feedback interaction.

In another case, a scenario surrounding the dressing of a salad was implemented. A bowl with a flexion sensor hidden among the leaves is sonified by means of a continuous crumpling sound activated by the action of the two forks mixing the salad. The system also records the quantity of action performed up to a certain time and varies the characteristics of the sound, which becomes gradually more fluid relative to the dry granularity of the initial timbre. The auditory feedback gives an instantaneous sense of the intensity of the mixing action. At a longer time scale, it provides a deeper sense of the mixing action and of the distribution of the dressing in the salad, thanks to the change in timbre of the sound.

AFFECTIVE FOOTSTEP SOUNDS
In this section we present the main design ideas of a recent study in which a model for the synthesis of natural footstep sounds was designed [6], and preliminarily assessed.

In an earlier model of natural walking and running footstep sounds [7], the pace of footsteps was controlled by tempo curves which were derived from studies in music performance, since strong similarities between walking/running and music performance have been found in prior research. A listening test for the validation of that model highlighted the way in which footstep sequences that were generated using expressive tempo curves, derived from music performance, were perceived as more natural by listeners compared to sequences having a constant pace. Using this study as a starting point, and in the context of the Affective Diary project [12], we have developed a model of footstep sounds for simulating the presence of people walking in a virtual environment. The design choice was that the footstep sounds should communicate the gender, age, weight, and emotional intention of the virtual walker.

We used the crumpling/impact sound model that is part of the SDT library, tuned to simulate different materials of the ground. Gravel, dirt, soft wood, snow and grass were used for the crumpling sound model; rubber, glass, steel and wood for the impact sound model. The timing in footstep sequences was controlled by using a footstep model developed after measurements of real walkers, who were asked to walk with emotional intentions (happiness, sadness, fear and anger), as well as with their natural (neutral) pace. In interactive listening tests, subjects could adjust pace and material to determine the gender of a virtual walker.

Preliminary results show that subjects associated both different pace and material to the two genders (Figure 5). Female walkers were identified by a faster pace (the time interval between two footsteps was about 0.5 s for females and 0.8 s for males), higher resonant frequencies for impact sounds (glass and steel sounds for females; rubber and wood sounds for males) and for crumpling sounds (mainly gravel and snow sounds for females; dirt and soft wood sounds for males).

It was also tested how subjects would change the emotion of footstep sound sequences. Subjects could control the sound on a two-dimensional activity-valence space in which pace characteristics (regularity and timing) were changed dynamically. Results are promising despite the existence of some confusion between angry and happy footstep sounds. This confusion could be overcome by improving the continuous control over the real-time change of the acoustical characteristics of the ground, thus allowing for a gradually changing perception of both the gender and emotional intention of a virtual walker. This idea is under test for presentation at the workshop.

CONCLUSIONS
Controlling the rich spectrum of sounds that the crumpling model produces in response to parameter variations is the most difficult task that must be faced by this research.



Figure 5. Subjects' percentage choices of different ground materials in association to walker's gender. The upper figure shows the choices for impact sound models. The lower figure shows subjects' preferences for different tunings of the crumpling sound model.

At the same time, further explorations are needed using many impactor synthesizers simultaneously, generating more sophisticated sound attributes from the materials involved in the crumpling simulation. These two qualities must be addressed in tandem, in order to perform experiments whose outcome may be more definitive than the incomplete and preliminary results reported here. The research to date reveals that crumpling sounds have the potential to become effective conveyors of enactive knowledge in interactive contexts simulating or augmenting everyday experience.

REFERENCES
1. N. Armstrong. An Enactive Approach to Digital Musical Instrument Design. PhD thesis, Princeton University, Princeton, NJ, 2006.

2. F. Avanzini and D. Rocchesso. Modeling collision sounds: Non-linear contact force. In Proc. Conf. on Digital Audio Effects (DAFX-01), pages 61–66, Limerick, Ireland, December 2001.

3. CLOSED (Closing the Loop of Sound Evaluation and Design). http://closed.ircam.fr.

4. P. Cook. Physically inspired sonic modeling (PhISM): Synthesis of percussive sounds. Computer Music Journal, 21(3), 1997.

5. P. Cook. Modeling Bill's gait: Analysis and parametric synthesis of walking sounds. In Proc. 22nd AES Conf. on Virtual, Synthetic and Entertainment Audio, Espoo, Finland, July 2002. AES.

6. A. DeWitt and R. Bresin. Sound design for affective interaction. In Affective Computing and Intelligent Interaction ACII 2007, pages 523–533, 2007.

7. F. Fontana and R. Bresin. Physics-based sound synthesis and control: crushing, walking and running by crumpling sounds. In Proc. Colloquium on Musical Informatics, pages 109–114, Florence, Italy, May 2003.

8. G. E. Frantti and L. A. Leverault. Auditory discrimination of seismic signals from earthquakes and explosions. Bull. Seism. Soc. Am., 55:1–25, 1965.

9. B. Giordano. Everyday listening: an annotated bibliography. In D. Rocchesso and F. Fontana, editors, The Sounding Object, pages 1–16. Edizioni Mondo Estremo, Florence, Italy, 2003.

10. B. Giordano and S. McAdams. Material identification of real impact sounds: effects of size variation in steel, glass, wood and plexiglass plates. J. of the Acoustical Society of America, 119(2):1171–1181, 2006.

11. G. Kramer. Auditory Display: Sonification, Audification, and Auditory Interfaces. Addison-Wesley, Reading, MA, 1994.

12. A. Lindström, A. Ståhl, K. Höök, P. Sundström, J. Laaksolahti, M. Combetto, A. Taylor, and R. Bresin. Affective Diary: designing for bodily expressiveness and self-reflection. In CHI '06 extended abstracts on Human Factors in Computing Systems, pages 1037–1042, New York, NY, 2006. ACM Press.

13. E. R. Miranda and M. M. Wanderley. New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. A-R Editions, Middleton, WI, 2006.

14. S. Delle Monache, P. Polotti, S. Papetti, and D. Rocchesso. Gamelunch: A physics-based sonic dining table. In Proc. Int. Computer Music Conf., Copenhagen, Denmark, August 2007.

15. S. O'Modhrain and G. Essl. PebbleBox and CrumbleBag: Tactile interfaces for granular synthesis. In NIME 2004, pages 74–79, Hamamatsu, Japan, 2004.

16. A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, 1984.

17. L. Peltola, C. Erkut, P. Cook, and V. Välimäki. Synthesis of hand clapping sounds. IEEE Trans. on Audio, Speech and Language Processing, 15(3):1021–1029, 2007.

18. C. Roads. Microsound. MIT Press, Cambridge, MA, 2004.

19. M. Saeki. Analytical study of multi-particle damping. J. Sound and Vibration, 281:1133–1144, 2005.

20. M. R. Schroeder. Fractals, Chaos, Power Laws: Minutes from an Infinite Paradise. W. H. Freeman & Company, New York, NY, 1991.

21. J. P. Sethna, K. A. Dahmen, and C. R. Myers. Crackling noise. Nature, 410:242–250, March 2001.

22. W. H. Warren and R. R. Verbrugge. Auditory perception of breaking and bouncing events: a case study in ecological acoustics. Journal of Experimental Psychology: Human Perception and Performance, 10(5):704–712, 1984.
