
Friday 7 February – Sunday 9 February 2014

PENINSULA ARTS CONTEMPORARY MUSIC FESTIVAL 2014

THINKING MUSIC
www.pacmf.co.uk

PENINSULA ARTS WITH PLYMOUTH UNIVERSITY

ICCMR WITH PLYMOUTH UNIVERSITY

THINKING MUSIC INTRODUCTION

Peninsula Arts Contemporary Music Festival is firmly establishing itself as an important platform in the UK for new music exploring ideas emerging from leading-edge research that is helping to pave the way for the music of the future.

Composers who make history have always taken risks and produced works that divided the opinions of the audiences of their time. This year’s festival theme, Thinking Music, invites composers and performers to take risks and venture into the unknown, and invites audiences to listen actively, open their minds and emotions to the unheard, and engage in the debate.

Thinking Music is allied to the project ‘Brain-Computer Music Interfacing for Monitoring and Inducing Affective States’, led by Prof Eduardo Miranda and Prof Slawomir Nasuto of the University of Reading’s Cybernetics Research Group, and funded by EPSRC. This year the Bergersen String Quartet will premiere Prof Miranda’s unprecedented new work, in which brain signals from four people on stage will generate the parts to be performed by the quartet in real time.

Thinking Music is also the title of Prof Miranda’s new book, which tells the inside story of the choral and orchestral work Sound to Sea. It describes in rich detail the concepts and processes with which the author engaged in the composition of the symphony. The book, which will be launched at the festival, includes the complete score and a CD of the full live recording of the premiere at Plymouth’s Minster Church of St. Andrew by Ten Tors Orchestra with mezzo-soprano Juliette Pochin, conducted by Simon Ible.

The festival programme is a showcase for Plymouth University composers Alexis Kirke, Duncan Williams, David Bessell, David Strang, Mike McInerney, and John Matthias. This year’s guest composer is the Lithuanian Linas Baltas, who makes his UK debut with a new work, AIR, for two string orchestras, to be performed by the festival’s resident ensemble, Ten Tors Orchestra.

The Festival will open with a new work responding to Peter Randall-Page’s concurrent exhibition in both the Peninsula Arts Gallery and Plymouth City Museum & Art Gallery. Plymouth University composer Duncan Williams’s electronic piece Concord for Five Elements will be performed in the Peninsula Arts Gallery at the festival launch on 7 February. Peter Randall-Page is a renowned international artist based in Devon. His sculptures are inspired by interdisciplinary themes and the exhibition will be his first major retrospective in the South West for 25 years.

FESTIVAL DIRECTORS

Eduardo R Miranda

Professor of Computer Music

Plymouth University

Simon Ible

Director of Music

Peninsula Arts

Plymouth University

ICCMR WITH PLYMOUTH UNIVERSITY

Composers in attendance: Ignacio Brasa, Linas Baltas, Steve Davismoon, Alexis Kirke, Eduardo R Miranda, John Matthias, David Strang, Martyn Ware, Duncan Williams, Sean Williams

Performers: Bergersen String Quartet, The Logothetis Ensemble, Ten Tors Orchestra, Jay Auborn, Dunstan Belcher, Joel Eaton, Simon Ible, Alison Kettlewell, Alexis Kirke, John Matthias, Eduardo R Miranda, Jane Pirie, Lauryna Sableviciute, Rodrigo Schramm, David Strang, Federico Visi, Martyn Ware, Sean Williams

BOX OFFICE

Peninsula Arts
Plymouth University
Roland Levinsky Building
Drake Circus
Plymouth PL4 8AA

Telephone: 01752 58 50 50
Email: [email protected]
Web: www.peninsula-arts.co.uk

Peninsula Arts operates within the Faculty of Arts and serves as the arts and culture organisation for Plymouth University. The year-round programme includes exhibitions, music, film, public lectures, theatre and dance/performance. One of its principal aims is to provide the communities of Plymouth, the South West and visitors to the region with access to a programme of wide-ranging, high-quality arts and cultural experiences, informed by the expertise, research and scholarship of the University and its partners.

BERGERSEN STRING QUARTET

Peninsula Arts Contemporary Music Festival is promoted in partnership with Plymouth University’s Interdisciplinary Centre for Computer Music Research

CONCORD FOR FIVE ELEMENTS
By Duncan Williams

What sound do six‑tonne ‘rocks’ make?

When you first catch sight of Peter Randall-Page’s enormous Platonic sculptures, it is impossible not to wonder at the sweeping figures in the Italian marble, generated by many thousands of years of intense pressure. The sculptures themselves, hand-crafted by the artist into the classical geometric puzzles we now see in the work, invite further questions about their own idiosyncrasies and commonalities: if these giant shapes could talk, what would they say to one another? How different would their voices be? And could they see past their differences to reach a concord? These are the driving inspirations behind Concord for Five Elements.

Each sculpture is voiced by a natural sound derived from Plato’s Timaeus (fire with the tetrahedron, earth the cube, air the octahedron, space the dodecahedron, and water the icosahedron). Source material for fire, air, and water was recorded in and around Dartmoor, the home of Randall-Page’s studio. Earth was sourced from another location recording of footsteps crunching on the moor. Perhaps (to modern ears at least) the most cryptic element, Space, had a source derived from the reverberant properties of each of the other elements combined. Each of these natural sounds was then processed with custom signal-processing routines developed in Plymouth at the Interdisciplinary Centre for Computer Music Research to produce distinctive timbres with harmonic ratios that are directly informed by the ratio of vertices and indices in the six-tonne marble shapes.
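The kind of mapping described above can be sketched in a few lines of code. The vertex counts are the standard ones for the five Platonic solids, but the rule turning them into frequency ratios here is an invented illustration, not the piece’s actual (unpublished) routine.

```python
# Illustrative sketch only: Plato's element/solid pairing from the programme
# note, with each solid's vertex count used to scale a base pitch. The rule
# mapping vertex counts to frequencies is a hypothetical example.

PLATONIC_SOLIDS = {
    "fire":  {"solid": "tetrahedron",  "vertices": 4},
    "earth": {"solid": "cube",         "vertices": 8},
    "air":   {"solid": "octahedron",   "vertices": 6},
    "space": {"solid": "dodecahedron", "vertices": 20},
    "water": {"solid": "icosahedron",  "vertices": 12},
}

def element_frequency(element, base_hz=110.0):
    """Scale a base pitch by the ratio of the solid's vertex count to the
    tetrahedron's four vertices (an assumed, illustrative rule)."""
    v = PLATONIC_SOLIDS[element]["vertices"]
    return base_hz * v / PLATONIC_SOLIDS["fire"]["vertices"]

for name in PLATONIC_SOLIDS:
    print(f"{name:5s} -> {element_frequency(name):.1f} Hz")
```

Under this assumed rule, earth (8 vertices) sounds an octave above fire, and water (12 vertices) a further fifth above that.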

The musical structure for the piece is then developed from five ‘cells’, each cell representing one of the sculptures. The pitch and rhythm notated in each cell is again derived from the angles and vertices of the sculptures associated with each element, with the exception of Space, which gives a long, reverberant drone. In order to reflect the human element of the sculptures, the cells are then placed in the timeline by hand, to gradually develop a conversation between the elements’ voices.

Finally, a novel signal-processing technique known as timbre morphing is applied to all the elements as the piece progresses. This is the first time timbre morphing has been used in a musical piece; until now it has existed only as a theoretical term and as signal-processing prototypes developed over the last five years. In the timbre-morphing routine, the harmonic content of each element’s voice (but not its rhythmic or pitch content) gradually moves closer to the harmonic content of the next element. The result is a process a bit like humans doing impressions of one another’s accents and dialects. Hence, in the conclusion to Concord we hear each of the musical cells overlaid, with a new timbre for each element: the elements have agreed.
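The general idea of morphing harmonic content can be sketched as a simple interpolation between two harmonic-amplitude envelopes; this is a minimal illustration assuming additive-synthesis voices, not the ICCMR routine itself, and the two example spectra are invented.

```python
import numpy as np

def morph_harmonics(amps_a, amps_b, t):
    """Linearly interpolate two harmonic-amplitude envelopes.
    t = 0 gives voice A unchanged; t = 1 gives voice B."""
    a, b = np.asarray(amps_a, float), np.asarray(amps_b, float)
    return (1.0 - t) * a + t * b

def synthesize(f0, amps, sr=44100, dur=0.5):
    """Additive synthesis: a sum of harmonics of f0 with the given amplitudes."""
    t = np.arange(int(sr * dur)) / sr
    sig = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
              for k, a in enumerate(amps))
    return sig / max(1e-9, np.max(np.abs(sig)))  # normalise to [-1, 1]

# Invented example spectra: a dull voice and a bright voice.
dull   = [1.0, 0.5, 0.2, 0.05]
bright = [0.3, 0.4, 0.8, 0.9]
halfway = morph_harmonics(dull, bright, 0.5)
audio = synthesize(220.0, halfway)
print(halfway)
```

At t = 0.5 the voice carries half of each spectrum; sweeping t from 0 to 1 over the course of a phrase produces the gradual “impression” effect described above.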

Related Event

Computer Music Performance: Concord for Five Elements 19:00 Friday 7 February and 17:30 Saturday 8 February, Peninsula Arts Gallery, Roland Levinsky Building, Plymouth University

DUNCAN WILLIAMS

SINGING THE MEDICATION LABEL
An artistic experiment in helping people with dementia remember new information using computer-generated music

By Simon Ible

People with dementia can find it more and more difficult to form new memories as their symptoms worsen. However, even those with more severe symptoms are often able to recall music from their past and sing along. The video documentary Alive Inside shows people who are almost catatonic with Alzheimer’s seeming to ‘come back to life’ briefly when played their favourite music on an iPod. These facts inspired composer Alexis Kirke to collaborate with people with dementia to see if they could recall important facts, such as their home address, the names of their medications, and their carers’ names and locations, when these were set to memorable music. I recently met with the composer to learn more about his musical project.

Kirke, a member of the Interdisciplinary Centre for Computer Music Research at Plymouth University, has developed a prototype of a simple computer music algorithm. It is based on the principles behind the “ABC song” used by children to learn the alphabet, and the jingles often heard on the radio to encourage people to remember phone numbers. The computer is provided with written phrases which the person with dementia wants to memorise. Examples might be the address “Number 900, Mount Georges Road” or the medication instruction “Aricept 11am”. From these phrases it endeavours to generate short tunes whose lyrics are the words to be memorised. For example, speak the phrase “Number 900, Mount Georges Road” out loud to yourself. The computer system attempts to generate a set of musical notes with rhythms similar to those spoken rhythms, but with a catchy tune overlaid. The person with dementia can – with the help of their carer – listen to this tune multiple times, hum along and then practise singing it back with the address “Number 900, Mount Georges Road” over the top.
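The flavour of such an algorithm can be sketched as follows. Everything here — the crude syllable rule, the pentatonic scale, the duration rule — is an invented illustration of the general idea, not Kirke’s actual prototype.

```python
import random
import re

# Hypothetical sketch: split a phrase into rough syllables, give quicker
# note values to the syllables of longer words (echoing spoken rhythm),
# and pick singable pitches from a major pentatonic scale.

PENTATONIC = [60, 62, 64, 67, 69]  # MIDI note numbers: C major pentatonic

def rough_syllables(word):
    """Very crude syllable count: runs of vowel letters."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def phrase_to_tune(phrase, seed=0):
    """Return (midi_pitch, duration_in_beats) pairs, one per syllable."""
    rng = random.Random(seed)  # seeded so repeat runs can be compared
    tune = []
    for word in phrase.split():
        syllables = rough_syllables(word)
        duration = 0.5 if syllables > 1 else 1.0  # quicker notes in longer words
        for _ in range(syllables):
            tune.append((rng.choice(PENTATONIC), duration))
    return tune

tune = phrase_to_tune("Number nine hundred Mount Georges Road")
print(len(tune), "notes")  # one note per rough syllable
```

Re-running with a different seed gives a different candidate jingle, mirroring Kirke’s description below of repeating the algorithm until a likeable jingle appears.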

“For a given phrase, it normally only takes a few repeat runs of the algorithm and then I’ll hear a jingle that I like,” says Kirke. “This prototype gives a good first approximation of the rhythms of the text – which can easily be tweaked by me if I want, becoming a kind of man/machine jingle collaboration! However I would expect more advanced versions of such an algorithm would be able to remove the human from the process.”

Although this is an artistic rather than a scientific experiment, Kirke wants to see if these tunes can embed themselves into the minds of the people with dementia – like musical jingles you can’t get out of your head – and enable them to recall the verbal information in future, perhaps by hearing it internally or humming the start of it to themselves.

As well as collecting memory tunes generated with collaborators, Kirke will ask the people with dementia what their favourite tunes from their past are, and collect together a number of those remembered melodies.

These memory tunes and melodies will then be used to construct Remember a Day, a 15-minute composition for cello, mezzo-soprano and electronics in three parts, premiering at Peninsula Arts Contemporary Music Festival 2014. “The performance will in fact be in the form of a musical ‘embedding’ session itself,” he explains. “The three movements will actually be very repetitive, so as to maximize the embedding of the tunes for any of my collaborators attending who have dementia. They will of course all be invited, and to rehearsals as well.”

The premiere will be introduced by Ian Sherriff, a trustee of the Alzheimer’s Society, a member of the Dementia Friendly Communities Champion Group which reports directly to the Prime Minister, and chair of Plymouth University Dementia Group. On the day after this premiere Ian and Alexis will give a public festival talk on the development and composition of Remember a Day.

“Thanks to Ian and Plymouth University Dementia Group I’ve been able to spend time learning from experts on helping people with dementia,” says Kirke. “This has included attending Singing for the Brain sessions and support groups.” In fact, he found that at the Plymouth Singing for the Brain sessions, run by musical director Sarah Chapman, music was used to help attendees remember the ordering of the days of the week. “She also encourages carers to sing shopping lists. I only found these things out after I’d had the idea for the piece, so it was very exciting to see how useful it could be.”

Kirke foresees catchier and more accurate computer systems, which could perhaps be linked to alarms on phones or smart watches. “Play the start of a familiar tune to someone, and the rest of it normally carries on in their head. Also, imagine a ring-tone that reminds the person with dementia of the name of the person calling them.” So as well as providing engagement opportunities and new inspiration to his collaborators with dementia, Kirke hopes this composition may inspire research into this potentially fruitful area.

Remember a Day is made possible through the support of Peninsula Arts, the Interdisciplinary Centre for Computer Music Research, Plymouth University Dementia Group, and the Alzheimer’s Society.

Related Events

Chamber Recital: Remember A Day for solo voice, cello and electronics with mezzo‑soprano Alison Kettlewell and Jane Pirie, cello 20:00 Friday 7 February, Sherwell Centre, Plymouth University

Festival Talk: Alexis Kirke and Ian Sherriff: The development and composition of Remember A Day 15:00 Saturday 8 February, Lecture Theatre 2, Roland Levinsky Building, Plymouth University

ALEXIS KIRKE

VENTURING INTO THE UNKNOWN, CREATING THE FUTURE: MUSIC, AI AND NEUROTECHNOLOGY
By Prof Eduardo Reck Miranda

I am a composer with an avid interest in harnessing the intelligence of computers to aid my creative practice. Make no mistake, however: I am not interested in replacing human musicians with computers. Rather, I am interested in systems that boost human creativity. To this end, nearly a decade ago I founded the Interdisciplinary Centre for Computer Music Research at Plymouth University in the UK, which is pioneering an emerging field I have named Music Neurotechnology. Soon after, Simon Ible and I created this festival, which has become an important venue for sharing our work with the public.

The practice of using computers to generate music has evolved in tandem with the development of computing technology. Computers have been programmed to generate music since the early 1950s, pre-dating the appearance of Artificial Intelligence as a field of research. For instance, the Illiac Suite for String Quartet, composed in the USA in the late 1950s by Lejaren Hiller (composer) and Leonard Isaacson (mathematician), is often cited as the first piece of music involving materials generated by a computer. Nowadays, the computer is ubiquitous in many aspects of music, from software for music production in the studio to systems for distributing music on the Internet.

Artificial Intelligence, or AI, can be generally defined as a scientific field looking into ways of endowing machines with some form of intelligence. John McCarthy, who is said to have coined the term in the mid-1950s, defined it as the science and engineering of making intelligent machines. Since then the field has evolved considerably and much progress has been made: from humanoid robots to software that can compose decent music automatically. AI still stirs controversy and disagreement among scientists, some of whom have even proclaimed the field dead. I do not necessarily agree. The problem is that the very notion of intelligence can be approached from different angles. Here I introduce two examples of how our research into Music Neurotechnology can foster an interdisciplinary approach to the study of music, and indeed of intelligence.

Firstly, can you imagine playing a musical instrument endowed with a living brain? Living musical instruments? What would they sound like? How would one play them?

New computational paradigms based on, or inspired by, the principles of information processing in physical, chemical and biological systems are promising avenues for the development of new types of intelligent machines. New types of computers are bound to emerge this century, which may significantly change the way we think of them – and their role in the future of music will be no exception. It has been reported that machines steered by chemical reactions have been capable of performing computational tasks such as the design of logic circuits. There has also been growing interest in the development of neurochips for computation: microchips that couple living brain cells and silicon circuits together. The ambition here is to harness the intricate dynamics of in vitro neuronal networks to perform computations.

As far as music is concerned, the dynamics of in vitro neuronal networks represent a source of rich temporal behaviour, and we are developing methods for rendering this behaviour into sound. My collaborators and I have developed and tested a number of rendering methods using different sound synthesis techniques; please refer to our paper “Computer Music Meets Unconventional Computing: Towards Sound Synthesis with In Vitro Neuronal Networks”, published in Computer Music Journal, Vol. 33, No. 1, pp. 9-18. We are currently studying how to achieve controllability and repeatability in such systems, with a view to developing in vitro neuronal network technology to build active musical instruments, as opposed to traditional musical instruments, which are by definition passive. We will not know how to play these instruments until we have built them. However, I have a pretty good hunch of how they might sound, and it is nothing like any existing acoustic musical instrument.
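To give a flavour of what “rendering temporal behaviour into sound” can mean, here is one invented example: treating each spike time from a culture as a sound-event onset and mapping the inter-spike interval to pitch. The actual ICCMR methods differ and are described in the Computer Music Journal paper cited above; every rule and number below is an assumption for illustration.

```python
# Hypothetical sketch: map a spike train to sound events, with shorter
# inter-spike gaps producing higher pitches (an invented rule).

def spikes_to_events(spike_times_s, low_hz=110.0, high_hz=880.0):
    """Turn a sorted list of spike times (seconds) into sound events."""
    events = []
    for t0, t1 in zip(spike_times_s, spike_times_s[1:]):
        gap = t1 - t0
        gap = min(max(gap, 0.01), 1.0)  # clamp gaps to [10 ms, 1 s]
        # linear map: 10 ms gap -> high_hz, 1 s gap -> low_hz
        freq = high_hz - (high_hz - low_hz) * (gap - 0.01) / 0.99
        events.append({"onset": t1, "freq": round(freq, 1)})
    return events

events = spikes_to_events([0.0, 0.05, 0.3, 1.5])
print(events)
```

A dense burst of spikes thus becomes a rapid high-pitched flurry, while sparse firing yields slow low tones — one simple way rich neuronal dynamics could shape musical time.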

Now, just imagine if you could tap into signals detected directly from your own brain to generate the score for a string quartet to play your music on the fly. Science fiction? Read on.

The idea of controlling musical instruments, or a system that generates music, with brain signals has been around since the 1960s. For instance, in the mid-1960s composer Alvin Lucier composed a piece called Music for Solo Performer, which involved percussion instruments played by the resonance of his electroencephalogram, or EEG. The EEG is a measurement of brainwaves detected using electrodes placed directly on the scalp. Technically, it is measured as the voltage difference between two or more electrodes on the surface of the scalp, one of which is taken as a reference. The EEG expresses the overall activity of millions of neurons in the brain in terms of charge movement, but the electrodes can detect this only in the most superficial regions of the cerebral cortex. Lucier placed electrodes on his own scalp, amplified the signals, and relayed them through loudspeakers placed near percussion instruments such as gongs, cymbals, timpani and snare drums. The EEG vibrations relayed by the loudspeakers set the instruments into vibration. Nowadays, we are beginning to witness the emergence of technology enabling the design of systems for the active control of sound and music with the EEG, as opposed to simply converting EEG vibrations into sound.

EDUARDO R MIRANDA

I am interested in developing brain-computer music interfacing technology, or BCMI, aimed at special needs and music therapy, in particular for people with severe physical disability. This research is motivated by the extremely limited opportunities for active participation in music making available to people with severe physical disability, despite advances in technology. For example, severe brain injury, spinal cord injury and locked-in syndrome result in weak, minimal or no active movement, which prevents the use of gesture-based devices. These patient groups are currently either excluded from music recreation and therapy, or left to engage in a less active manner, through listening only.

The EEG is a difficult signal to handle because it is extremely faint and is filtered by the skull and the scalp. It needs to be amplified significantly and analysed in order to be of any use for a brain-computer interface.

My collaborators and I have recently developed a prototype BCMI, which we tested with a locked-in syndrome patient after a severe stroke at the Royal Hospital for Neuro-disability, in London. More details about this work can be found in the paper “Brain-Computer Music Interfacing (BCMI): From Basic Research to the Real World of Special Needs”, published in the journal Music and Medicine, Vol. 3, No. 3, pp. 134-140. Our BCMI is based on a neurological phenomenon known as visually evoked potentials: when a person looks at lights flashing at specific frequencies, this shows up in their EEG, and a computer can be programmed to infer which of the flashing lights they are staring at. We created musical algorithms that translate specific EEG signals associated with different flashing icons on a computer screen into distinct musical processes. Looking at one icon would sound a certain note, looking at another would produce a certain rhythm, staring at yet another would change its pitch, and so on.

This is exactly the same technology that my PhD student and enduring collaborator, Joel Eaton, and I designed for my piece Activating Memory, an innovative experimental composition for eight performers: a string quartet and a BCMI-quartet. The BCMI-quartet comprises four people wearing brain caps furnished with electrodes to read information from the brain. During the performance, the BCMI-quartet generates musical scores to be performed by the string quartet in real time; each of its members generates a part for one musician of the string quartet. With the kind support of g.tec (a manufacturer of biomedical technology based in Austria) and funding from EPSRC, we designed an extraordinary machine that uses brain information to control a system that produces musical scores. This is the first time that such a (not so) futuristic system will be revealed to the public.
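The detection principle behind a visually-evoked-potential interface can be sketched with a small simulation: each on-screen icon flickers at its own frequency, and the icon the user stares at produces the strongest EEG power at that frequency. This is an illustrative toy, not the g.tec-based system; all signal parameters are invented.

```python
import numpy as np

def detect_target(eeg, sr, icon_freqs):
    """Return the flicker frequency with the most spectral power in `eeg`."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / sr)
    # power at the FFT bin nearest each candidate flicker frequency
    powers = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in icon_freqs}
    return max(powers, key=powers.get)

# Simulated 4-second EEG at 250 Hz: noise plus a 15 Hz evoked response
# (i.e. the user is staring at the icon flickering at 15 Hz).
sr = 250
t = np.arange(int(sr * 4.0)) / sr
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * 15 * t) + rng.normal(0, 0.3, t.size)

choice = detect_target(eeg, sr, [10, 12, 15, 17])
print(choice)  # -> 15
```

In a real BCMI each detected frequency would then trigger its musical process — a note, a rhythm, a pitch change — as described above.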

The field of Music Neurotechnology is still in its infancy, but it is already bearing fruit, as can be seen in this 2014 edition of Peninsula Arts Contemporary Music Festival. One thing that is clear to me, however, is that McCarthy’s notion of AI “as the science and engineering of making intelligent machines” should probably evolve into AI “as the science and engineering of making machines that harness our human intelligence.”

In the meantime, in tandem with ICCMR’s forays into music, AI and neurotechnology, my continuing interest in pushing the boundaries of music is epitomized by Sounds from Underground, for prepared piano and strings, composed in collaboration with pianist Luciane Cardassi during a residency at the Banff Centre in Canada last year. I created a unique sonic vocabulary for this piece by exploring unusual ways of playing the instruments. Whereas some notes of the piano are muffled with earplugs inserted between the strings inside the instrument, other notes are played by means of magnetic resonators. I teamed up with engineer and composer Andrew McPherson to build an array of computer-controlled electromagnets placed inside the piano to vibrate the strings. The electromagnets vibrate the strings independently of the hammers, affording new and beautiful sonorities, all controlled during the performance via a bespoke piece of software.

Related Events

Book Launch: Thinking Music 18:30 Saturday 8 February, Lecture Theatre 2, Roland Levinsky Building, Plymouth University

Concert: Sounds from Underground for prepared piano, computer-controlled electromagnets and strings

Anathema for piano and string orchestra Ten Tors Orchestra with Lauryna Sableviciute, piano conducted by Simon Ible 19:30 Saturday 8 February, Theatre 1, Roland Levinsky Building, Plymouth University

Chamber Concert: Activating Memory for string quartet and brain-computer interface Bergersen String Quartet 14:30 Sunday 9 February, Sherwell Centre, Plymouth University

AIR
Q & A with this year’s Peninsula Arts Contemporary Music Festival guest composer, the Lithuanian Linas Baltas

By Simon Ible

S.I. Can you begin by describing the structure you have used in your new work Air?

L.B. Air is composed for two string orchestras and is my first binary composition. The first binary compositions in Europe were ritual music used mostly by pagans. Much of this music disappeared as Christianity spread through Europe. However, you can still find examples of binary composition in the Lithuanian folk songs Sutartines. These simple polyphonic songs are still sung today. Similar binary principles were also used in ancient Greek songs, for example the Delphic Hymns to Apollon.

You can also find examples of binary composition in the works of Debussy, Bartok and Stravinsky. I love the way they used and developed the binary composition technique.

S.I. How did you come up with the idea for using two string orchestras?

L.B. My initial idea for Air was very simple. I had decided to create two separate pieces which, performed at the same time, would also make a completely different composition. The two orchestras represent the two principal elements that make up air: nitrogen and oxygen. My initial concept of a 50:50 split changed to reflect the actual proportions of nitrogen (78%) and oxygen (21%) in air, and this became the instrumentation for my composition.

S.I. How is this structure used in the tonality of the work?

L.B. I started by looking at the molecular structures of Nitrogen and Oxygen and used these to identify fundamental tones which I found by sketching simple diagrams using the molecular structures.

After finding the fundamental tones I used logarithms to find my melodic lines. After developing the melodic lines I doubled these musical phrases in the bass part. To my surprise, I realised that the bass melody had become very similar to the bass part of the second movement, Air, of J. S. Bach’s Orchestral Suite No. 3 in D major. That was completely unexpected but also strangely reassuring.

S.I. Your tempo for Air remains the same throughout. I have noticed you treat music tempi similarly in some of your other works. How deliberate is this?

L.B. Yes, in the vast majority of my one-movement compositions I do not change the time signature or tempo, and I have used the same principle for Air. I achieve dynamic development by changing rhythm and articulation. I entirely agree with the composer L. Andriessen, who once said: “If one can change tempo and time signatures in a composition by using and developing rhythmic phrases, then there is no need to change the time signature or tempo marking itself.”

S.I. You have, however, made distinct divisions within the structure.

L.B. That’s correct. Air is a one-movement polystylistic composition with three very well-defined divisions. In all three divisions I have used and varied rhythmic structures and have dramatically changed the “outfit” of the composition.

The structural side of the composition and the realisation of the musical material are very important to me, but most important of all is the final sound, or in other words the acoustic phenomenon of the piece. This final goal allows me to discard, in some parts of the piece, any elements of the structural conception that would not lead to the perfect sound. I am ready to sacrifice concept for sound. Music shouldn’t only be generated by a mind developing ideas and following structural models. It has to have more intuitive, more sensitive origins, which make music less scientific and more understandable. That is why my music has a responsibility to the music lover coming to listen to it: to make sure it is not only conceptually interesting or structurally new, but also enjoyable.

S.I. So do you have an overall vision for Air?

L.B. During the process of creating Air I had the two structural models of nitrogen and oxygen in my mind, and they somehow translated into a clear picture, a visualisation of Air: oxygen as a panoramic 360° view of clear fresh air, with the picture turning around at a steady speed three times during the composition, while nitrogen I visualised as an object flying through the air at changing speed, in a controlled manner.

I am not sure if others will see the same vision I had, but I hope it will help the music lover to better understand Air. The most difficult, and at the same time most satisfying, part of the creative process was joining oxygen and nitrogen together. At first it seemed like mission impossible, but when I felt that I had got it right, and it sounded as it was meant to, I was more than satisfied. I was really happy with what I saw and heard.

S.I. So a final word from you about your new music?

L.B. Air is light and fresh. Two orchestras will make the Air. Take a breath and enjoy!

Related Event

Concert: Air for two string orchestras Ten Tors Orchestra conducted by Simon Ible 19:30 Saturday 8 February, Theatre 1, Roland Levinsky Building, Plymouth University

LINAS BALTAS

IMPRINT
By David Bessell

My new piece, Imprint, explores the idea of imprinting, taking as its premise that memory acts as an imprint of past experience on the structure of the brain. This is explored via various processes which together form the structure and harmonic language of the piece.

The piece falls into a style of composition called ‘spectral music’, pioneered by the French composer Gérard Grisey. Reacting against some of the more mathematical and abstract excesses of high twentieth-century Modernism, Grisey developed a compositional technique founded on the fundamental properties of sounds themselves. It is from this conceptual basis that Imprint takes its starting point.

All sounds can be broken down into a series of component pitches which are known as harmonics or overtones. These component pitches are not normally heard as separate notes but are rather blended together by the perceptual process of hearing into the characteristic timbre of the instrument. Measurements taken from computer analysis of these overtones form the basis for the actual notes used in Imprint.

The harmonic material is derived from computer spectral analysis of the digital convolution of pairs of trombone notes. When two notes are combined in the computer through the convolution process what results is only those harmonics that are common to the two notes. In other words what we are left with is the memory or ‘Imprint’ of one note on the other. Measurements taken from this Imprint are then used to derive the harmonic and melodic aspects of the piece.
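The principle at work here is the convolution theorem: convolving two signals in the time domain multiplies their spectra, so only partials present in both notes survive strongly. The sketch below demonstrates this with synthetic tones; it is an illustration of the principle, not the composer’s actual analysis chain, and the note frequencies are invented.

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr  # one second of samples

def note(f0, harmonics):
    """A synthetic note: equal-amplitude partials at multiples of f0."""
    return sum(np.sin(2 * np.pi * f0 * k * t) for k in harmonics)

a = note(100, [1, 2, 3, 4])   # partials at 100, 200, 300, 400 Hz
b = note(100, [2, 4, 6, 8])   # partials at 200, 400, 600, 800 Hz

# Convolution theorem: the spectrum of (a conv b) is the product of spectra,
# so partials not shared by both notes are suppressed.
product = np.abs(np.fft.rfft(a)) * np.abs(np.fft.rfft(b))
freqs = np.fft.rfftfreq(len(t), 1.0 / sr)
shared = freqs[product > 0.5 * product.max()]
print(shared)  # only the partials common to both notes remain
```

Here only the 200 Hz and 400 Hz partials are common to the two notes, so they alone dominate the product spectrum — the ‘Imprint’ of one note on the other.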

The second main aspect of imprinting in this piece is obtained by processing the sound of the four trombones in real time through a ring modulator. This multiplies the sonic signature of one sound with another, producing a characteristic imprint consisting of sum and difference frequencies related to the pitches of the original sounds. These additional tones are used throughout the piece to produce a variety of subtly blended textural effects.
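The sum-and-difference behaviour of ring modulation follows from the product identity sin(A)·sin(B) = ½[cos(A−B) − cos(A+B)], and can be verified numerically. The two input frequencies below are invented examples, not pitches from the piece.

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr  # one second of samples

carrier = np.sin(2 * np.pi * 440 * t)    # e.g. an instrumental partial
modulator = np.sin(2 * np.pi * 100 * t)
ring = carrier * modulator               # ring modulation = multiplication

# The output spectrum contains only the difference (340 Hz) and
# sum (540 Hz) frequencies; the 440 and 100 Hz inputs vanish.
spectrum = np.abs(np.fft.rfft(ring))
freqs = np.fft.rfftfreq(len(t), 1.0 / sr)
peaks = freqs[spectrum > 0.5 * spectrum.max()]
print(peaks)  # peaks near 340 and 540 Hz
```

Because the output pitches are sums and differences rather than the originals, ring-modulated material blends with, yet never doubles, the live trombone sound.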

On an emotional level the artistic result is tense, dark and foreboding.

DAVID BESSELL

https://sites.google.com/site/davebessellmusic/home

Related Event

Concert: Imprint for strings, four trombones and ring modulator Ten Tors Orchestra conducted by Simon Ible 19:30 Saturday 8 February, Theatre 1, Roland Levinsky Building, Plymouth University

PERFORMING IMAGINED NOISES: THE LOGOTHETIS PROJECT
By Michael McInerney

One of the challenges for contemporary composers working with the expanded soundworlds made available through technology is that of notation: how to imagine the sounds that we want to hear and represent them for interpreting performers to realise. This challenge is not new, and the Viennese composer Anestis Logothetis (1921 – 1994) developed a distinct approach to both notation and interpretation that allowed him to work with a broad palette of imagined sounds.

Logothetis seems to have been convinced that the Western tradition of interpretation – in which a group of performers gather in rehearsal around a composer’s score to prepare a performance that realises the composer’s vision – is in some sense a cultural and artistic cornerstone. Though he was also a composer of music for tape and electronic resources, the heart of his oeuvre is a collection of more than a hundred beautifully crafted scores for electro-acoustic ensemble that are just that – no more than pages of musical notation. His new system of notation can, in theory, represent any imaginable sound. The problem is noise, not only the acoustic variety, but also noise as defined by information theory – that information-rich signal whose content of meaning is undecidable. The more complex the sounds, it seems, the more notation struggles to keep up with them.

Undaunted, Logothetis continued in his pursuit. In order to maintain the traditional rehearsal process Logothetis abandoned the other central plank of the concert tradition – the preciousness of the ‘composer’s idea’. Once that is ditched and the composer accepts that the resulting performance also belongs in some sense to the performers then the problem of transmitted meaning becomes less significant.

It is this radical approach to interpretation and notation that makes Logothetis’ scores important to interpreting performers working with technology today. These scores, by indicating sound shapes, sound types and families of pitches but not how the sounds should be produced, have remained (as the composer intended) open to developments in technology.

The Logothetis Ensemble is a quartet of composer/performers who came to the same dilemma about noise and notation but from the other side – discovering that the expanded soundworlds of their creative practice squeezed into traditional notation with considerable difficulty. There is enough information in one of Logothetis’ scores for painstaking rehearsal to be possible, as we apply our shared basket of skills and resources to meeting the double challenge of realising Logothetis’ illustration as faithfully as possible while keeping an ear to the final sounding result that the audience will hear. We project the scores that we have worked from onto the screen behind us so that the audience too may enjoy the mimetic play between vision and sound implicit in them.

Two of the works in the concert for the Plymouth Contemporary Music Festival are our own realisations of single-page scores by Anestis Logothetis: Ghia Tin Ora (1975) and Enklaven (1967). Each score represents the challenge of visualising a musical ‘work’ slightly differently. Ghia Tin Ora (‘For the hour’) is the more obvious journey through a sequence of sounds, though even here there are moments in which we as interpreters have had to decide when parts are parallel, sequential or overlapping. The form and character of Enklaven is explained by its title: a family of sonic enclaves, each performed more or less continuously during a fixed time bracket within the overall work, embedded within a larger holding texture that runs throughout. In our realisation this ‘back-sheet’ was pre-composed as a gradually evolving diffused soundshape that comes straight from the computer. The piece between these two represents something of a find: recently drawn from an archive in Vienna by the musicologist Tomas Gorbach, Fantasmata (1960) is a very early tape piece that has been arranged for live diffusion for this concert by members of the ensemble.

Related Event

Performance: The Logothetis Ensemble
Richard Douglas-Green, Michael McInerney and Michael Neil
18:00 Sunday 9 February, Theatre 1, Roland Levinsky Building, Plymouth University

THE LOGOTHETIS ENSEMBLE

ICCMR WITH PLYMOUTH UNIVERSITY

Box Office
Peninsula Arts
Plymouth University
Drake Circus
Plymouth PL4 8AA
T: 01752 585050
E: [email protected]

Buy tickets online www.peninsula-arts.co.uk

Peninsula Arts Contemporary Music Festival is promoted in partnership with Plymouth University’s Interdisciplinary Centre for Computer Music Research (ICCMR).

Programme Overview:
PENINSULA ARTS CONTEMPORARY MUSIC FESTIVAL 2014
THINKING MUSIC

FRIDAY 7 FEBRUARY
Crosspoint, Roland Levinsky Building
Festival Launch
Festival Directors Simon Ible & Eduardo Miranda introduce this year’s festival themes and performances

Crosspoint, Roland Levinsky Building
Computer music performance
Duncan Williams: Concord for Five Elements (2014)

Upper Lecture Theatre, Sherwell Centre
Chamber Recital
Alexis Kirke: Remember a Day for solo voice, cello and electronics (2014)
Martyn Ware: Recapture (2014)

SATURDAY 8 FEBRUARY
Lecture Theatre 2, Roland Levinsky Building
Alexis Kirke and Ian Sherriff: Public festival talk on the development and composition of Remember a Day (2014)

Crosspoint, Roland Levinsky Building
David Strang and Sean Williams: Light Entropy (2014)

Lecture Theatre 2, Roland Levinsky Building
Book Launch
Eduardo Reck Miranda: Thinking Music (2014)

Theatre 1, Roland Levinsky Building
Concert
Ten Tors Orchestra with Lauryna Sableviciute, piano, conducted by Simon Ible
Eduardo R Miranda: Sounds from Underground for prepared piano, computer-controlled electromagnets and strings (2014)
Eduardo R Miranda: Anathema for piano and string orchestra (2012 – 2014)
Linas Baltas: AIR for two string orchestras (2014)
David Bessell: Imprint for strings, four trombones and ring modulator (2014)
Ignacio Brasa: Zart for four trombones (2014)

SUNDAY 9 FEBRUARY
Performance
Crosspoint, Roland Levinsky Building
David Strang and Sean Williams: Light Entropy (2014)

Upper Lecture Theatre, Sherwell Centre
Chamber Concert
Bergersen String Quartet with Lauryna Sableviciute, piano
Eduardo R Miranda: Activating Memory for string quartet and brain-computer interface quartet (2014)
Stephen Davismoon: Glory Streams for piano and string quartet (2009)
Eliot Short: The Golden Road (2012)
Iancu Dumitrescu: Alternances (1967)

Theatre 1, Roland Levinsky Building
Chamber Concert
The Logothetis Ensemble: Performing Imagined Noises

Theatre 1, Roland Levinsky Building
Concert
John Matthias: Geisterfahrer (2013)