A SURVEY OF AUDIO-RELATED KNOWLEDGE AMONGST SOFTWARE ENGINEERS DEVELOPING HUMAN-COMPUTER INTERFACES

Joanna Lumsden & Stephen Brewster

September 2001

TR – 2001 – 97

http://www.dcs.gla.ac.uk/research/audio_toolkit/


Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Glasgow G12 8RZ

E-mail: {jo,stephen}@dcs.gla.ac.uk


1. INTRODUCTION

Due in part to the rising popularity and power of small-screen mobile devices such as WAP mobile phones and PDAs (I.D.G., 2000, Computing Market Dynamics, 2000), whose visual presentational resource is significantly less than that of typical desktop computers, recent years have seen increased research effort concerning the use of audio feedback within application user interfaces (Alty, 1995, Beaudouin-Lafon and Conversy, 1996, Brewster, 1997, Brewster, 1998, Brewster and Crease, 1997). Despite the fact that sound is part of our everyday lives and has been shown to be beneficial when used to enhance both graphical and non-visual human-computer interfaces (Brewster and Crease, 1999, Brewster et al., 1998, Crease and Brewster, 1998, Crease and Brewster, 1999), it remains greatly under-exploited in commercial human-computer interfaces[1] as an effective means of communicating information to users.

[1] Excluding the computer games market.

As part of its research, the Audio Toolkit Project has developed a toolkit to support the use of audio-enhanced Java™ widgets (Crease et al., 2000a, Crease et al., 1999, Crease et al., 2000b), together with guidelines for the design of sounds for use within human-computer interfaces[2]. So that the Audio Toolkit and associated guidelines may be most effectively accessed, and thereby used, by software engineers - and in particular user interface developers - a survey of audio-related knowledge (and of the need for this knowledge) amongst software engineers was undertaken.

[2] For more information about the Audio Toolkit, contact one of the authors or visit our web site: www.dcs.gla.ac.uk/research/audio_toolkit

Given the general lack of audio-enhancement in current user interface designs for both mobile and non-mobile applications, it was hypothesised that user interface developers typically have little or no experience of creating designs that include sound beyond the 'beeps' commonly used for standard system-level errors. It was also hypothesised that the level of technical acoustic knowledge or expertise amongst user interface developers in general is low. Additionally, it was theorised that developers' level of knowledge of acoustics and/or music would be reflected in the user interfaces they typically develop (or vice versa). These hypotheses are important to the further development of the Audio Toolkit, since developers' level of technical knowledge or expertise in acoustics and/or music will affect their understanding of the Audio Toolkit and associated guidelines and, in turn, their ability to use and apply these facilities.

This report presents the results of a survey conducted in order to investigate: (1) software engineers' extent of user interface design and development experience for both mobile and non-mobile devices; (2) the level of acoustic knowledge typically held by software engineers (especially those developing audio-enhanced user interfaces); and (3) the general level of technical knowledge of music amongst software engineers. The survey began by investigating whether software engineers typically develop user interfaces for applications designed to run on either mobile or non-mobile devices (or whether they interchangeably



undertake user interface design and development projects for both device types) and the extent to which these user interfaces are enhanced with audio feedback. In so doing, it gauged software engineers' self-assessed level of expertise in this respect. Having determined software engineers' experience of user interface design and development, the survey examined their level of technical knowledge/understanding of both acoustics and music in order to: (1) determine what assumptions (if any) may be made about prior levels of understanding in this domain amongst software engineers; and (2) identify any relationships between software engineers' level of knowledge/understanding of acoustics/music and their practice and expertise in user interface development. Together, the results of this survey provide a rough characterisation of the 'typical' consumer of the Audio Toolkit and accompanying guidelines, such that supporting documentation can be correctly pitched and can plug any gaps in knowledge necessary for their successful and efficient use. To gauge public opinion, the survey also asked respondents to comment on their views on the use of audio feedback within human-computer interfaces and to identify questions they would like answered with respect to the use of audio feedback in general.

2. CONDUCTING THE SURVEY

The survey was conducted by means of a questionnaire (see Appendix A) which was circulated via a selection of professional/conference discussion groups and mailing lists including: the GIST research group; the BCS HCI professional body; the First Tuesday Glasgow professional discussion forum; the CHI forum; the Association for Software Design (ASD) discussion forum; and the Computer-Human Interaction Special Interest Group (CHISIG). The questionnaire was also circulated amongst software developers within the Department of Computing Science at the University of Glasgow. These particular professional bodies were targeted on account of the representativeness of their members in relation to the intended user group for the Audio Toolkit.

The questionnaire was distributed in a variety of formats to accommodate variation in facilities at respondents' sites. The formats included: Microsoft Word and rich text format (RTF) documents containing protected template tables; hard-copy paper versions (within the Department of Computing Science); and protected Microsoft Excel spreadsheets. Respondents were asked to complete the questionnaire using whichever format was most convenient and were given both postal and e-mail addresses by which to return their completed questionnaires. Respondents were given a deadline - approximately six weeks after the circulation date - by which to return their questionnaires; at this point, a total of twenty-six completed questionnaires had been returned. Given the means by which the questionnaires were distributed, it is impossible to determine the percentage return rate for this survey.

It is recognised that questionnaires are a convenient, but by no means ideal, mechanism by which to collect information. Often questions are not answered as intended, and return rates - as in this case - can be poor. To avoid as many as possible of the known pitfalls associated with questionnaire deployment, some trial completions of the questionnaire were conducted. The trial respondents gave valuable direct feedback regarding the questionnaire, which was incorporated into the final version before distribution.

3. THE RESULTS

The following sections present an analysis of the findings obtained from this survey. The discussion deals with each of the areas investigated in turn and, where applicable, correlates the findings in order to highlight patterns within the data. Given the relatively small sample upon which these findings are based, it is not possible to attribute any meaningful statistical significance to the results. As such, the findings are reported from an observational or qualitative standpoint. That said, given the absence of previous (and perhaps more substantial) investigations of this nature, the results presented here can be viewed as a placeholder for understanding of the current state of acoustic/musical knowledge amongst software user interface developers and the influence this knowledge has on developers' perceived expertise in both mobile and non-mobile application user interface development (and in particular the audio-enhancement of such interfaces).



The following discussion should be considered in conjunction with the questionnaire (see Appendix A) in order to establish the precise nature and context of the questions asked.

3.1 User Interface Design & Development Experience

Of the twenty-seven respondents, 77% design and develop user interfaces for applications running on non-mobile devices. When asked to make a self-assessment[3] of their level of experience of user interface design and development for non-mobile devices, the average rating amongst these respondents was 3.5. Audio-enhanced GUIs are designed and developed by just under one third (30%) of the respondents who design and develop user interfaces for non-mobile devices. Equating to 22% of all respondents, this suggests that the design and development of audio-enhanced GUIs is not a commonplace task amongst the sample of software engineers who responded to the survey. Indeed, of those who design and develop graphical user interfaces, only a relatively small proportion include audio-enhancement in their designs.

Of those respondents who design and develop audio-enhanced GUIs for non-mobile devices, the average self-assessed level of expertise in user interface design and development is 4.5. Contrasting this with the average self-assessment across respondents who do not include audio-enhancement in the user interfaces they design and develop for non-mobile devices (a rating of 3.1) suggests one of two things: (a) that developers who build audio-enhanced GUIs for non-mobile devices are more experienced and thereby have a potentially higher skill level; or (b) that developers who build audio-enhanced GUIs for non-mobile devices have a higher opinion of their level of expertise than those who build purely visual GUIs. Although it is unfortunately not possible to determine which of these appraisals is more accurate on the basis of this survey, as a group, the respondents who design and develop audio-enhanced GUIs for non-mobile devices demonstrate a high level of knowledge and understanding of acoustic and musical terminology (see later).

Ten percent of the respondents who design and develop user interfaces for non-mobile devices categorise the underlying applications to which the interfaces provide access as safety-critical[4]. Although 50% of these applications were constructed by developers who use audio-enhancement in their user interface designs for non-mobile devices, this amounts to only 1 in 6 of the respondents who employ audio-enhancement. Within the scope of the sample population which responded to this survey, this suggests that the majority of applications are not safety-critical in nature and that audio-enhancement is not being used primarily to effect safety-critical warnings.

Marginally less than 20% of respondents design and develop user interfaces for applications running on mobile devices. All of these respondents also design and develop user interfaces for applications running on non-mobile devices. In contrast, only a quarter of respondents who design and develop user interfaces for non-mobile applications also design and develop user interfaces for mobile applications. In essence, therefore, everyone who creates user interfaces for mobile applications has, or has had, experience of user interface development for non-mobile devices (approximately 20% of all respondents); developers of user interfaces for non-mobile applications, on the other hand, have not necessarily had development experience in the field of mobile computing.
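The asymmetry described above is ordinary conditional-percentage arithmetic over overlapping sets. The following minimal sketch uses invented respondent identifiers (not the survey's raw data) and assumes, as the report states, that the mobile-UI developers are a subset of the non-mobile-UI developers:

```python
# Hypothetical respondent sets (invented labels, not the survey's data):
# every mobile-UI developer also develops non-mobile UIs, but not vice versa.
mobile = {"r1", "r2", "r3", "r4", "r5"}             # roughly 20% of 26 respondents
non_mobile = mobile | {f"x{i}" for i in range(15)}  # 20 respondents in total

both = mobile & non_mobile  # the same intersection serves both directions

# "All mobile developers also develop for non-mobile devices" -> 100%.
pct_mobile_also_non_mobile = 100 * len(both) / len(mobile)
# "Only a quarter of non-mobile developers also develop for mobile" -> 25%.
pct_non_mobile_also_mobile = 100 * len(both) / len(non_mobile)

print(pct_mobile_also_non_mobile)  # 100.0
print(pct_non_mobile_also_mobile)  # 25.0
```

The same intersection thus yields two different percentages depending on which group forms the denominator, which is why the report quotes each overlap in both directions.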
Of those respondents who design and develop user interfaces for applications running on mobile devices, 40% also generate audio-enhanced user interfaces for non-mobile devices (the reciprocal of this measure is 33.3%). None of the mobile applications for which respondents design and develop user interfaces is safety-critical in nature. The average self-assessed experience level for generating user interfaces to mobile applications, amongst the respondents who design and develop such interfaces, is 2.2. In contrast, for the same subset of respondents, the average self-assessed experience level for creating user interfaces to non-mobile applications was considerably higher at 4.8 - higher, in fact, than the equivalent measure across all respondents who design and develop user interfaces to non-mobile applications (a value of 3.5).

[3] On a scale of 1 to 5, where 1 was inexperienced and 5 was expert.
[4] For example, interfaces to medical surgical equipment.

Comparison of these self-assessed ratings would suggest that, despite what might be



described as low self-esteem regarding their experience of creating user interfaces to mobile applications, those developers who generate user interfaces to mobile applications are typically more experienced in user interface creation in general. Although uncorroborated, this may be because the generation of non-text-based user interfaces to mobile applications is a relatively immature field in which developers have not yet had the chance to become as confident as in the well-established field of user interface design and development for non-mobile applications.

Where applicable, respondents were asked to indicate the output modalities typically included in the user interfaces to mobile applications they develop. In particular, they were asked to identify which of the following output modalities were typically included in their designs for user interfaces to mobile devices: text; graphics; standard non-speech sounds, such as standard system/error beeps; synthesised speech; and enhanced audio feedback - for example, the use of musical quality and/or everyday sounds. For the appropriate subset of respondents, the percentage who use each of the output modalities listed is shown in Figure 1.

[Figure 1: bar chart; y-axis: % of respondents (0-100%); x-axis: output modality - Text, Graphics, Standard Non-Speech Sound, Synthesised Speech, Enhanced Audio-Feedback]

Figure 1 - Typical Percentage Use of Output Modalities Within User Interfaces to Mobile Applications

As is evident from Figure 1, the most commonly used output modality is Text (100% of respondents typically include text in their user interface designs for mobile applications). Second to Text is Graphics, at 80% use. Together, these figures illustrate the predominance of the visual output paradigm adopted in typical user interfaces designed for mobile applications.

3.2 Acoustic Experience/Knowledge

As mentioned previously, the Audio Toolkit project has developed a toolkit (including a small selection of Java Swing-based widgets) which enables software developers to include sophisticated audio feedback in their user interface designs. To test the hypothesis that the level of technical acoustic knowledge or experience is generally minimal amongst typical user interface developers, respondents were asked to provide a self-assessment of their knowledge/experience in this respect. They were also asked to indicate whether they design and generate sounds: (a) for inclusion in user interfaces; and (b) for use other than in human-computer interfaces. In each case, they were asked to provide a description of the sounds and their use where applicable. This section outlines their responses to these questions.

The average self-assessed level of experience[5] of acoustics amongst the user interface developers who responded to the survey is 1.9. This suggests that developers are typically less confident in their acoustic expertise than in their user interface design and development experience - a further indicator of the under-exploitation of sophisticated sound in typical human-computer user interfaces. Furthermore,


[5] Again, on a scale of 1 to 5, where 1 was inexperienced and 5 was expert.


respondents' general lack of confidence in the field of acoustics highlights a need to pitch the Audio Toolkit documentation at such a level as to make it accessible to developers who have little or no knowledge of acoustics.

Only 26.9% of respondents claimed to design and develop sounds for inclusion in user interface designs. However, 83.3% of the respondents who design and develop audio-enhanced GUIs for non-mobile devices generate sounds for use in their user interfaces. Conversely, 71.4% of the respondents who generate sounds for inclusion in user interface designs also design and develop (for non-mobile devices) user interfaces that are enhanced with audio feedback. These results suggest that, although the majority of developers who create audio-enhanced GUIs for non-mobile applications also generate the sounds which they include in their designs, over one quarter of those respondents who generate sounds for use in human-computer interfaces do not design user interfaces which include these sounds.

Of the respondents who design and develop user interfaces for mobile devices, 60% generate sounds for inclusion in software application user interfaces. In contrast, only 42.9% of those respondents who generate sounds for inclusion in user interfaces to software applications also design and develop user interfaces to mobile applications. When compared with the figures above for audio-enhanced non-mobile application development, these results suggest that sound is at present typically used to a greater extent within non-mobile application user interfaces than in mobile application user interfaces.
The disparity between the relative numbers of respondents who generate, and who use, sound in their user interface designs (both for mobile and non-mobile applications) suggests that: (1) developers of user interfaces in which sound is included currently have to resort to generating sounds which suit their needs (as opposed to being able to use 'off-the-shelf' audio-enhancement facilities); and (2) there are several software engineers who generate sounds for use by others. The fact that so many of the developers who design human-computer interfaces including sound have to generate their own audio feedback further suggests that the sounds being generated by group (2) are insufficient to meet their needs.

Consider now the various experience levels of respondents who generate sounds for inclusion in user interfaces to software applications. In contrast to the average self-assessed experience level with respect to acoustics across all respondents (1.9), the same measure across only those respondents who generate sounds for inclusion in user interfaces to software applications is 3.2. This average rating (3.2) is identical to that for respondents who design and develop audio-enhanced user interfaces for non-mobile applications. On average, respondents who do not generate sounds for inclusion in software user interfaces rate their level of acoustic knowledge/experience at only 1.5. As one might expect, the self-assessed rating for acoustic knowledge/experience is considerably higher amongst the subsets of respondents who generate sounds for inclusion in software application user interfaces, and who design and develop audio-enhanced user interfaces to non-mobile applications, than it is across all respondents - the disparity increasing further when compared with the average self-assessed rating for those respondents who do not generate sounds.
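Comparisons of this kind are simply subgroup means computed over the same 1-to-5 self-ratings. A minimal sketch with invented ratings (not the survey data) shows how splitting an overall average into subgroup averages exposes a gap that the overall figure hides:

```python
# Invented (acoustics_rating, generates_sounds) pairs for four hypothetical respondents.
ratings = [(1, False), (2, False), (4, True), (3, True)]

def mean(values):
    """Arithmetic mean, rounded to one decimal place as in the report."""
    return round(sum(values) / len(values), 1)

overall = mean([r for r, _ in ratings])                  # 2.5
generators = mean([r for r, g in ratings if g])          # 3.5
non_generators = mean([r for r, g in ratings if not g])  # 1.5

print(overall, generators, non_generators)
```

Here the subgroup split (3.5 versus 1.5) reveals a disparity that the overall mean of 2.5 averages away - the same effect as the report's 1.9 overall rating versus the 3.2 and 1.5 subgroup ratings.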
For those respondents who generate sounds for inclusion in human-computer interfaces, the average self-assessed level of experience of user interface design and development is 2.7 for mobile applications and 3.4 for non-mobile applications. This illustrates that respondents are more confident in their ability to design and develop user interfaces to non-mobile applications than to mobile applications, irrespective of their experience of sound generation. Conversely, the average self-assessed experience levels in terms of acoustics across respondents who design and develop user interfaces for non-mobile applications and for mobile applications are 2.2 and 2.3 respectively. The proximity of these averages suggests that the mobility of the device for which respondents typically create user interfaces has little impact upon their general understanding of acoustics.

A mere 11.5% of respondents generate sounds for use other than in software application user interfaces, and all of the sounds so generated are for the purpose of creating/composing music. Of those respondents who generate sounds for inclusion in software application user interfaces, 14.3% also generate sounds for other uses. One third of the respondents who generate sounds for use other than in software (i.e. for musical composition) also generate sounds for use in software application user interfaces, and the same proportion



design and develop audio-enhanced user interfaces to non-mobile applications[6]. All of the respondents who generate sounds for use other than in software applications design and develop user interfaces to non-mobile applications, whereas only one third of them design and develop user interfaces to mobile applications. Twenty percent of the respondents who design and develop user interfaces to mobile applications also generate sounds for non-software purposes, compared to 15% of respondents who design and develop user interfaces to non-mobile applications.

The average self-assessed experience level with respect to acoustics amongst the respondents who generate sounds for use other than in software is 3.0 (compared to 3.2 for those who generate sounds for use in software application user interfaces). This suggests that the purpose for which developers generate sounds has little impact upon their subjective measure of acoustic knowledge. Similarly, the purpose for which developers generate sounds has little or no impact on the average self-assessed experience level for non-mobile and mobile application user interface design and development (a value of 1.0 in each case). Of all respondents, the subset with the highest average self-assessed level of experience with regard to acoustics is the group who both design and generate sounds for use in software application user interfaces and for use other than in software (an average value of 4.0).

3.3 Musical Experience/Knowledge

As with acoustics, it was hypothesised that the extent to which software developers are familiar with music (in a technical sense) will affect, and be reflected in, their confidence and experience in the field of user interface development. Unlike acoustics, however, it was also hypothesised that software developers typically have a considerable basis of technical musical knowledge and are more comfortable with musical concepts than with those particular to acoustics. This section reports the findings of the survey with respect to the level of technical musical knowledge/experience typically held by user interface developers.

Respondents were asked to provide a self-assessment of their technical knowledge or experience of music[7]. The average subjective rating across all respondents was 2.2 (higher than the equivalent rating for acoustics - see Section 3.2). Approximately 77% of respondents play (or have in the past played) a musical instrument[8]. Amongst these respondents, the average self-assessed level of technical musical knowledge is 2.6 (as one might expect, higher than the average across all respondents, which includes ratings for respondents who have never played a musical instrument).

The correspondence between software engineers' instrumental musical ability and their user interface development and sound generation activities was ascertained by cross-referencing this information across the applicable subsets of respondents. Figure 2 details the results of this cross-referral. These figures indicate that a very large proportion (80.0%) of respondents who design and develop user interfaces to non-mobile applications play or have played a musical instrument. Unsurprisingly, all of the respondents who generate sounds for use other than in software (as shown in Section 3.2, this is primarily for musical composition) play or have played a musical instrument. More than two thirds of the respondents who design and develop audio-enhanced GUIs for non-mobile applications play or have played a musical instrument. These figures suggest that instrumental musical ability is common amongst respondents who design and develop user interfaces but, most importantly, is strongly evident amongst the subset of developers who design and develop user interfaces to non-mobile applications and, in particular, audio-enhanced GUIs. It is interesting to note that a considerably lower percentage of the developers who create mobile application user interfaces have instrumental musical experience.

[6] Conversely, only 16.7% of respondents who design and develop audio-enhanced user interfaces to non-mobile applications also generate sounds for use other than in software.
[7] Once again, on a scale of 1 to 5, where 1 was inexperienced and 5 was expert.
[8] Throughout the remainder of this report, 'musical instrument' is taken to include voice.



Activity                                                        % of instrument players     % undertaking the activity
                                                                who also undertake the      who also play a musical
                                                                activity                    instrument

Design & develop audio-enhanced GUIs                            20.0%                       66.7%
Design & generate sounds for use other than in software         15.0%                       100.0%
Design & generate sounds for use in software application UIs    20.0%                       57.1%
Design & develop UIs for non-mobile applications                80.0%                       80.0%
Design & develop UIs for mobile applications                    10.0%                       40.0%
Design & develop UIs for mobile and non-mobile applications     10.0%                       40.0%

The relationship between the two percentage columns is an "also" relationship. For example, the first row should be read as "20.0% of respondents who play a musical instrument also design and develop audio-enhanced GUIs" and "66.7% of respondents who design and develop audio-enhanced GUIs also play a musical instrument".

Figure 2 - Cross referencing instrumental musical ability with user interface design & development and sound generation activities

Consider now the variation in average subjective measures of experience (not only with respect to user interface design and development but also with respect to acoustics) according to whether or not respondents play or have played a musical instrument.
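The "also" cross-referencing used in Figures 2 and 5 can be reproduced mechanically from boolean questionnaire answers. The sketch below uses invented responses for four hypothetical respondents (not the survey data) to show how the two directions of each cross-reference are computed, and why the resulting matrix is not symmetric:

```python
# Invented boolean answers for four hypothetical respondents (not survey data).
responses = {
    "plays_instrument":    [True, True, True, False],
    "audio_enhanced_guis": [True, False, False, False],
}

def also_pct(row: str, col: str) -> float:
    """Percentage of respondents answering yes to `row` who also answered yes to `col`."""
    both = sum(r and c for r, c in zip(responses[row], responses[col]))
    return round(100 * both / sum(responses[row]), 1)

# Each direction uses a different denominator, so the matrix is asymmetric.
print(also_pct("plays_instrument", "audio_enhanced_guis"))  # 33.3
print(also_pct("audio_enhanced_guis", "plays_instrument"))  # 100.0
```

This mirrors the tables' reading convention: the same intersection of respondents produces two different percentages depending on which subgroup forms the base.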

Type of Experience/Knowledge                          Play/played a musical    Have never played a
                                                      instrument               musical instrument

Design & development of non-mobile application UIs    3.3                      4.5
Design & development of mobile application UIs        1.0                      3.0
Acoustics                                             2.0                      1.6

Figure 3 - Average subjective measures of user interface design & development experience and acoustic knowledge relative to instrumental musical ability

It is interesting to note that, with the exception of acoustic knowledge, respondents who have no instrumental musical experience rate their level of experience in terms of both mobile and non-mobile application user interface development higher than those respondents who play or have played a musical instrument. On the basis of this survey, no explanation can be proffered as to why this is the case. However, purely at the level of conjecture, the difference in average self-assessed ratings across the two



groups may be a consequence of differences in personality traits between musicians and non-musicians. It is important to note, however, that respondents who play or have played a musical instrument are more confident in their understanding of acoustics than those respondents who have never played a musical instrument. Seventy percent (70%) of respondents who play or have played a musical instrument receive or have received formal musical tuition of some description.

For respondents who play or have played a musical instrument:

Type of Experience/Knowledge                          Have had formal          Have not had formal
                                                      musical tuition          musical tuition

Design & development of non-mobile application UIs    3.2                      3.3
Design & development of mobile application UIs        1.0                      1.0
Acoustics                                             1.9                      2.2

Figure 4 - Average subjective measures of user interface design & development experience and acoustic knowledge relative to instrumental musical ability combined with receipt of formal musical tuition

For the respondents who play or have played a musical instrument, their average self-assessed experience levels for both mobile and non-mobile application user interface development and self-assessed knowledge level for acoustics has been calculated with respect to their receipt of musical tuition and is shown in Figure 4. The figures shown in this table suggest that respondents' receipt of formal musical tuition has little effect on their average experience ratings in relation to either mobile or non-mobile application user interface development. In contrast, where respondents who play or have played a musical instrument have received no formal musical tuition, they appear more confident in their knowledge of acoustics than do respondents who have had formal musical tuition. Albeit contrary to that which might have been expected, this result may be due to the fact that the more respondents have been formally taught about music the greater their appreciation of its complexity and therefore the less self-assured they are when asked about their level of knowledge of acoustics. Further investigation, beyond the scope of this survey, would be required to identify the actual cause of this difference. Only one third (33.3%) of those respondents who design and develop audio-enhanced user interfaces to non-mobile applications play or have played a musical instrument and either receive or have received formal musical tuition. Conversely, only 14.3% of those respondents who play or have played a musical instrument and receive or have received formal musical tuition design and develop audio-enhanced user interfaces to non-mobile applications. This suggests that despite their involvement in audio-based user interface design, the majority of designers of audio-enhanced user interfaces do not have a solid musical foundation from which to draw experience and knowledge. 
The figures also highlight how few software engineers with such a musical background are directly utilising that knowledge and experience in their user interface designs. Figure 5 shows further cross referrals between respondents' combined musical ability and formal musical tuition and their user interface design and sound generation activities. Since the pattern of relative percentages here follows that shown in Figure 2, it would seem that the receipt of formal musical tuition does not have a particularly noticeable impact on developers' user interface design and sound generation activities.


Of respondents who play(ed) a musical instrument & receive(d) formal musical tuition, the percentage who also:
    design & develop audio-enhanced GUIs: 14.3%
    design & generate sounds for use other than in software: 14.3%
    design & generate sounds for use in software application user interfaces: 7.1%
    design & develop UIs for non-mobile applications: 85.7%
    design & develop UIs for mobile applications: 7.1%
    design & develop UIs for mobile and non-mobile applications: 7.1%

Of respondents in each activity, the percentage who also play(ed) a musical instrument & receive(d) formal musical tuition:
    design & develop audio-enhanced GUIs: 33.3%
    design & generate sounds for use other than in software: 66.7%
    design & generate sounds for use in software application user interfaces: 14.3%
    design & develop UIs for non-mobile applications: 60.0%
    design & develop UIs for mobile applications: 20.0%
    design & develop UIs for mobile and non-mobile applications: 20.0%

Cross referencing respondents' combined ability to play a musical instrument and receipt of formal musical tuition with their user interface development and sound generation activities. The relationship is an "also" relationship: for example, the first entry of the first list should be read as "14.3% of respondents who play(ed) a musical instrument and receive(d) formal musical tuition also design and develop audio-enhanced GUIs", and the first entry of the second list as "33.3% of respondents who design and develop audio-enhanced GUIs also play(ed) a musical instrument and receive(d) formal musical tuition".

Figure 5 - Cross referencing combined instrumental musical ability and receipt of formal musical tuition with user interface design & development and sound generation activities

Slightly more than one third (34.6%) of respondents have formal musical qualifications9 - amounting to 45% of those respondents who play or have played a musical instrument. More than half (57.1%) of respondents who play or have played a musical instrument and have or have had formal musical tuition have formal musical qualifications. In real terms, 30% of all respondents have the above extent of musical abilities. Whilst this is a relatively large proportion to have such extensive musical knowledge and capabilities, when documenting the Audio Toolkit and associated guidelines it should be borne in mind that over two thirds of a typical audience will not have this foundation, and it cannot therefore be assumed. Of those respondents who design and develop audio-enhanced GUIs for non-mobile applications, 16.7% have formal musical qualifications (the converse being that 11.1% of respondents who have formal musical qualifications create audio-enhanced GUIs for non-mobile applications). Although the developers of such audio-enhanced GUIs have potentially absorbed considerable musical knowledge as a result of their development activities, these figures show that software engineers undertaking this type of user interface development typically have few or no formal musical qualifications to support their design effort. Naturally, this has substantial implications for the provision of any audio-based facility for user interface development.
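The "also" relationship used in the cross-referencing figures reduces to a set intersection over respondent attributes: for two groups A and B, the same overlap |A ∩ B| is reported as a percentage of |A| and as a percentage of |B|. A minimal sketch of this calculation follows; the respondent IDs and group memberships are hypothetical and do not reproduce the survey data:

```python
# Sketch of the "also" cross-referencing used in the survey figures:
# one overlap, reported relative to each group's size in turn.

def also_percentages(group_a, group_b):
    """Return (% of A who are also in B, % of B who are also in A)."""
    overlap = len(group_a & group_b)
    return (100.0 * overlap / len(group_a),
            100.0 * overlap / len(group_b))

# Hypothetical respondent IDs, for illustration only.
plays_and_tutored = {1, 2, 3, 4, 5, 6, 7}   # play(ed) instrument & tutored
audio_gui_builders = {1, 2, 8}              # build audio-enhanced GUIs

pct_a_also_b, pct_b_also_a = also_percentages(plays_and_tutored,
                                              audio_gui_builders)
# pct_a_also_b: % of instrument players/tutored who also build audio GUIs
# pct_b_also_a: % of audio-GUI builders who also play & were tutored
```

Reading a single intersection in both directions is what allows statements such as "85.7% of X also do Y" and "60.0% of Y also do X" to coexist while describing one overlap of differently sized groups.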


9 For example, they have sat formal music-related examinations - either practical or theoretical - such as Standard or Higher Grade Music or the Associated Board practical and theoretical instrumental graded examination series.


Surprisingly, only 33.3% of respondents who generate sounds for use other than in software applications (typically for the purpose of musical composition - see section 3.2) have formal musical qualifications and only 11.1% of respondents who have formal musical qualifications generate such sounds. Slightly more than one quarter (28.6%) of respondents who generate sounds for inclusion in software application user interfaces have formal musical qualifications - the converse of this being that 22.2% of those respondents who claim to have formal musical qualifications generate sounds for this type of use. In essence, therefore, this means that the majority of developers who generate the sounds that are being included in the user interfaces to software applications have no formal qualifications upon which to base their creative process. Once again, this has important implications for the distribution of development facilities for audio-based user interface design.

Of respondents who play(ed) a musical instrument, receive(d) formal musical tuition & have formal musical qualifications, the percentage who also:
    design & develop audio-enhanced GUIs: 12.5%
    design & generate sounds for use other than in software: 12.5%
    design & generate sounds for use in software application user interfaces: 12.5%
    design & develop UIs for non-mobile applications: 62.5%
    design & develop UIs for mobile applications: 12.5%
    design & develop UIs for mobile and non-mobile applications: 12.5%

Of respondents in each activity, the percentage who also play(ed) a musical instrument, receive(d) formal musical tuition & have formal musical qualifications:
    design & develop audio-enhanced GUIs: 16.7%
    design & generate sounds for use other than in software: 33.3%
    design & generate sounds for use in software application user interfaces: 14.3%
    design & develop UIs for non-mobile applications: 25.0%
    design & develop UIs for mobile applications: 20.0%
    design & develop UIs for mobile and non-mobile applications: 20.0%

Cross referencing respondents' combined ability to play a musical instrument, receipt of formal musical tuition and formal musical qualifications with their user interface development and sound generation activities. The relationship is an "also" relationship: for example, the first entry of the first list should be read as "12.5% of respondents who play(ed) a musical instrument, receive(d) formal musical tuition, and have formal musical qualifications also design and develop audio-enhanced GUIs", and the first entry of the second list as "16.7% of respondents who design and develop audio-enhanced GUIs also play(ed) a musical instrument, receive(d) formal musical tuition, and have formal musical qualifications".

Figure 6 - Cross referencing combined instrumental musical ability, receipt of formal musical tuition and formal musical qualifications with user interface design & development and sound generation activities

Twenty percent (20.0%) of respondents who design and develop user interfaces for mobile applications and 30.0% of respondents who design and develop user interfaces for non-mobile applications have formal musical qualifications. This is the same as saying that 66.7% and 11.1% of respondents who have musical qualifications design and develop user interfaces to non-mobile and mobile applications respectively. Only 11.1% of respondents who have formal musical qualifications design and develop user interfaces for both mobile and non-mobile applications. Although this has no direct bearing


on their audio-based activities, it illustrates the knowledge base of the target user group for the Audio Toolkit and in so doing highlights the fact that little can be assumed about software engineers' formal musical background when documenting the Audio Toolkit and associated guidelines. Figure 6 shows the results of cross referring respondents' combined ability to play a musical instrument, receipt of musical tuition, and formal musical qualifications with their user interface development and sound generation activities. More than half (57.7%) of respondents can read (or at least decrypt) sheet music. For those respondents who can read10 sheet music, their average self-assessed technical experience/knowledge level of music is almost double that of the subset of respondents who cannot read sheet music (values of 2.9 and 1.4 respectively). This suggests that respondents' ability to read sheet music has substantial influence over their subjective measure of musical expertise - especially given that only 64.3% of respondents who can read sheet music have formal musical qualifications11. Since only three quarters (75.0%) of respondents who can play a musical instrument can read sheet music (whereas all respondents who can read sheet music play a musical instrument), and there are considerably more respondents who can play an instrument than can read sheet music, respondents' ability to read sheet music cannot be inferred from their ability to play a musical instrument. Furthermore, since respondents' ability to read sheet music has a greater beneficial effect on their subjective measure of technical musical knowledge than their ability to play a musical instrument (average ratings of 2.9 and 2.6 respectively), it would appear that ensuring a basic understanding of written music may be of considerable value to the acceptance and understanding of the Audio Toolkit.
Approximately 93% of respondents who have received formal musical tuition can read sheet music and, conversely, 86.7% of respondents who can read sheet music have received formal musical tuition. This suggests that receipt of formal musical tuition has - as one would expect - an observable influence over respondents' ability to read music. That said, there is a subset of respondents who have been given musical tuition but have not been taught to read sheet music and, additionally, a subset of respondents who can read sheet music but who appear to have taught themselves (or been informally taught) to do so. Only 53.3% of all respondents who play a musical instrument possess the maximum musical background - that is, they play a musical instrument, have been formally tutored, have formal musical qualifications, and can read sheet music. Figure 7 shows cross referral of respondents' ability to read sheet music with their user interface development and sound generation activities. It illustrates that only 50% of respondents who design and develop audio-enhanced user interfaces to non-mobile applications can read sheet music, and a smaller proportion still of respondents who design and generate sounds for use in software application user interfaces can read sheet music. Given that respondents' ability to read sheet music has been shown to increase their confidence in their understanding of music, this suggests a requirement to address, at least at a basic level, software engineers' facility to read and understand written music in order to make the Audio Toolkit and associated guidelines more accessible.

10 From this point forward, when discussing sheet music 'read' should be taken to mean 'read or decrypt'.
11 Unsurprisingly, all respondents who have formal musical qualifications can read sheet music.


Of respondents who can read sheet music, the percentage who also:
    design & develop audio-enhanced GUIs: 20.0%
    design & generate sounds for use other than in software: 20.0%
    design & generate sounds for use in software application user interfaces: 20.0%
    design & develop UIs for non-mobile applications: 80.0%
    design & develop UIs for mobile applications: 13.3%

Of respondents in each activity, the percentage who can also read sheet music:
    design & develop audio-enhanced GUIs: 50.0%
    design & generate sounds for use other than in software: 100.0%
    design & generate sounds for use in software application user interfaces: 42.9%
    design & develop UIs for non-mobile applications: 60.0%
    design & develop UIs for mobile applications: 40.0%

Cross referencing respondents' ability to read sheet music with their user interface development and sound generation activities. The relationship is an "also" relationship: for example, the first entry of the first list should be read as "20.0% of respondents who can read sheet music also design and develop audio-enhanced GUIs", and the first entry of the second list as "50.0% of respondents who design and develop audio-enhanced GUIs can also read sheet music".

Figure 7 - Cross referencing ability to read sheet music with user interface design & development and sound generation activities

Figure 8 shows the subjective measures of experience for both user interface development and acoustics according to respondents' ability to read sheet music.

Type of Experience/Knowledge                          Can read sheet music   Cannot read sheet music
Design & development of non-mobile application UIs    3.3                    3.8
Design & development of mobile application UIs        1.0                    3.0
Acoustics                                             2.1                    1.7

Figure 8 - Average subjective measures of user interface design & development experience and acoustic knowledge relative to respondents' ability to read sheet music

The figures in the table above suggest that respondents' ability to read sheet music has no beneficial effect on their subjective measure of experience with respect to either non-mobile or mobile application user interface development. In fact, it could be suggested that the ability to read sheet music appears to reduce respondents' self-assessment of their experience in user interface development. However, given that the majority of user interfaces being developed include no audio enhancement, this observation is of little direct consequence. What is more interesting is that respondents' ability to read sheet music corresponds to a higher average self-assessed rating of acoustic knowledge. This suggests that, when focussing on audio-related experience, software engineers have a heightened level of understanding if they have the ability to read music. Once again, this reinforces the importance of the ability to read sheet music when


working with audio-enhancement of human-computer user interfaces, regardless of the mobility of the platform on which the user interfaces are to run.

3.4 Familiarity With Acoustical Software Engineering and Musical Terminology

To assess the accuracy with which software engineers interpret acoustic and musical terminology, and thereby to determine the level of terminological understanding that might be assumed within the target user group for the Audio Toolkit, respondents were presented with a series of acoustic and musical terms. For each term, respondents were asked to indicate whether the term was familiar or unfamiliar to them and, where familiar, to provide a brief definition of the term as they understood it. Respondents were asked to refrain from referring to a dictionary or technical text in order to complete this section of the questionnaire (which can be seen in Appendix A). Through collaboration and agreement between a musical expert and an acoustic expert, respondents' interpretations were assessed for correctness. An interpretation was awarded a 'score' of 0.5 if it represented a partially correct and/or incomplete definition and a 'score' of 1.0 if it was wholly accurate. This section outlines the results of this investigation12.
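The scoring scheme just described (0.5 for a partially correct definition, 1.0 for a wholly accurate one, summed across respondents per term) can be sketched as follows; the term names and scores below are hypothetical and do not reproduce the survey data:

```python
# Illustrative aggregation of the 0.5 / 1.0 definition scores described above.
# Each respondent's answer per term: None = unfamiliar / no definition given,
# 0.0 = incorrect, 0.5 = partially correct, 1.0 = wholly accurate.

def term_totals(responses):
    """Total score per term across all respondents (max = no. of respondents)."""
    totals = {}
    for answers in responses:          # one dict of term -> score per respondent
        for term, score in answers.items():
            if score is not None:
                totals[term] = totals.get(term, 0.0) + score
    return totals

def pct_correct_among_familiar(responses, threshold):
    """% of familiar terms (score is not None) scored at or above `threshold`."""
    familiar = correct = 0
    for answers in responses:
        for score in answers.values():
            if score is not None:
                familiar += 1
                if score >= threshold:
                    correct += 1
    return 100.0 * correct / familiar

# Hypothetical scores for two respondents over three terms.
responses = [
    {"Chord": 1.0, "Earcon": 0.5, "Timbre": None},
    {"Chord": 0.5, "Earcon": 0.0, "Timbre": 1.0},
]
totals = term_totals(responses)   # e.g. Chord accumulates 1.0 + 0.5 = 1.5
```

Evaluating `pct_correct_among_familiar` at thresholds 0.5 and 1.0 corresponds to the "0.5+" and "1.0" levels of correctness used in the analysis below.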

Figure 9 - Total score per term across all respondents

[Bar chart omitted from this transcript. Y-axis: Total Score Across All Respondents (Max = 26), scale 0-15. X-axis: Acoustic/Musical Term - Auditory Icon, Auditory Stream, Chord*, Earcon, Harmony*, Media, Modality, Motif, Octave*, Pitch*, Presentational Resource, Register*, Semitone*, Sound Habituation, Sound Intensity, Sound Masking, Sustained Sound, Timbre*, Tone*, Transient Sound (* = musical term).]

Figure 9 shows the total score across all respondents for each term (terms marked with * are considered musical terms, the rest being acoustical software engineering terms13). For each term, the maximum total score that was possible was 26 (the number of respondents). As illustrated in Figure 9, no term achieved a maximum score - in fact, they all failed to achieve a 50% score. In general, as highlighted by this and following analysis of the results, respondents were more familiar with and as a consequence provided more accurate definitions for musical terms than for acoustical software engineering terms. On average, respondents only provided a correct definition for 7 out of the 20 terms investigated and achieved a score of 0.5 or 1.0 for 52.5% of terms with which they claimed familiarity and for which they provided a definition. When this measure is restricted to only those terms for which respondents achieved a complete and wholly accurate score (i.e. 1.0) it drops considerably to 27.9%. In essence, across the collective of musical and acoustical software engineering terms, respondents were unable to achieve even


12 With hindsight, the term 'acoustic signature' was eliminated from analysis given lack of an agreed definition for assessment purposes.
13 Terms have been categorised as musical or acoustical software engineering on the basis of the field in which they first appeared.


partial correctness for almost half of the terms listed. This suggests that, for any audio-related documentation directed at typical user interface developers to be effective, the use of specialised terminology will have to be carefully monitored and, where its use cannot be avoided, it will need to be clearly explained and defined. Now consider respondents' familiarity with, and accuracy of definition of, musical terms and acoustical software engineering terms. On average, respondents claimed to be familiar with 6 out of the 8 musical terms but only 5.5 out of the 12 acoustical software engineering terms. Furthermore, respondents managed, on average, to correctly define almost 50% of the musical terms but only just over one quarter of the acoustical software engineering terms.

                  Average % correct @ 0.5+    Average % correct @ 1.0
Musical Terms             53.9                        27.9
Acoustic Terms            46.7                        27.4

Figure 10 - Average % of familiar terms that were defined correctly (within the given limits) across musical and across acoustic terminology

Treating them as independent categories, Figure 10 lists the average percentage of familiar musical and acoustical software engineering terms which respondents defined correctly within the two assessed levels of correctness. It shows that respondents were able to achieve a minimal level of correctness for more than half of the identified musical terms, compared with less than half of the acoustical software engineering terms. This observation is compounded by the previously stated indication that, on average, respondents were familiar with 75% of musical terminology but only 46% of acoustical software engineering terminology. Sections 3.2 and 3.3 clearly identified differences in respondents' acoustic and musical backgrounds and illustrated the impact that knowledge had on their experience of user interface development and sound generation activities. Therefore, having witnessed the difference in reported familiarity and accuracy of definition according to term type across all respondents, consider now the variation in these measures (that is, term familiarity and accuracy of definition) according to respondents' characteristics.
To assess the influence respondents' knowledge and background14 exerted over their familiarity with both musical and acoustical software engineering terminology, the following categories of respondents were identified on the basis of the preceding sections: all respondents; respondents who play or have played a musical instrument; respondents who receive or have received formal musical tuition; respondents with formal musical qualifications; respondents classified as acoustics experts15; respondents classified as music experts; respondents who build audio-enhanced GUIs for non-mobile applications; respondents classified as expert non-mobile application user interface developers; respondents classified as expert mobile application user interface developers; respondents who develop user interfaces to non-mobile applications (regardless of their level of expertise); and respondents who develop user interfaces to mobile applications (again regardless of their level of expertise). For each of these subsets of respondents, the following analysis was performed: the number of respondents within each subset to whom each acoustical software engineering/musical term was familiar (see Figure 11); for those terms that were considered familiar by each subset of respondents, the percentage of terms that were answered correctly (with a score of 0.5 or 1.0) within that group (see Figure 12); and for those terms that were considered familiar by each subset of respondents, the percentage of terms that were answered correctly (with a score of 1.0) within that group (see Figure 13).

14 In terms of their musical experience, acoustic experience, and user interface development experience.
15 Respondents were classified as 'expert' if their self-assessed experience level was higher than the average across all respondents.
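The 'expert' classification of footnote 15 is a simple above-average threshold on self-assessed ratings. A minimal sketch follows; the ratings and the rating scale are hypothetical, for illustration only:

```python
# Sketch of the 'expert' classification from footnote 15: a respondent is an
# expert in an area if their self-assessed rating exceeds the overall average.

def classify_experts(ratings):
    """Return the indices of respondents whose rating is above the mean."""
    mean = sum(ratings) / len(ratings)
    return [i for i, r in enumerate(ratings) if r > mean]

# Hypothetical self-assessed acoustics ratings (scale assumed for illustration).
acoustics_ratings = [1, 2, 2, 3, 4, 5]
experts = classify_experts(acoustics_ratings)   # indices of above-average raters
```

Each subset listed above (acoustics experts, music experts, expert mobile/non-mobile UI developers) would be derived this way from the corresponding self-assessment question.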


[Bar charts omitted from this transcript. For each acoustic/musical term (x-axis), the y-axis (0-25) gives the number of respondents to whom the term is familiar, plotted separately for: all respondents; those who play an instrument; those with formal musical tuition; those with musical qualifications; acoustics experts; music experts; audio-enhanced GUI builders; expert non-mobile UI developers; expert mobile UI developers; non-mobile UI developers; and mobile UI developers.]

Figure 11 - Number of respondents within identified subset of respondents to whom each acoustical software engineering/musical term is familiar


[Bar charts omitted from this transcript. For each acoustic/musical term (x-axis), the y-axis (0-100) gives the % of correct responses at the 0.5+ level, plotted separately for the same eleven subsets of respondents as in Figure 11.]

Figure 12 - For those terms that were considered familiar by each subset of respondents, the % of terms that were answered correctly (with a score of 0.5 or 1.0) within that group


[Bar charts omitted from this transcript. For each acoustic/musical term (x-axis), the y-axis (0-100) gives the % of correct responses at the 1.0 level, plotted separately for the same eleven subsets of respondents as in Figure 11.]

Figure 13 - For those terms that were considered familiar by each subset of respondents, the % of terms that were answered correctly (with a score of 1.0) within that group


The previous three graphs show the detailed raw figures for each of the calculations. The following discussion summarises what is shown in these graphs. Consider first the number of respondents in each group who were familiar with each of the terms. From the detail shown in Figure 11, the following table (Figure 14) summarises: the maximum percentage of respondents in each group that were familiar with any given term (for example, a maximum of 95% of respondents who play or have played a musical instrument are familiar with any given term); the number of terms for which this maximum percentage applies; the actual terms for which this maximum percentage applies; and, across all terms, the average percentage of respondents to whom any given term was familiar (for example, on average 63.5% of respondents who play or have played a musical instrument are familiar with any given term).

Respondent subset: max % familiar with any given term (number of terms at max: terms most familiar to group); average % familiar with any given term

    All respondents: 84.6% (2: Harmony, Pitch); average 57.7%
    Play(ed) musical instrument: 95.0% (4: Chord, Harmony, Octave, Pitch); average 63.5%
    Receive(d) musical tuition: 100.0% (4: Chord, Harmony, Octave, Semitone); average 69.3%
    Have musical qualifications: 100.0% (5: Chord, Harmony, Octave, Pitch, Semitone); average 67.8%
    Acoustics experts: 92.3% (3: Pitch, Timbre, Tone); average 68.8%
    Music experts: 100.0% (7: Chord, Harmony, Octave, Pitch, Semitone, Timbre, Tone); average 76.5%
    Audio-enhanced GUI builders: 100.0% (3: Auditory Icon, Earcon, Modality); average 66.7%
    Expert non-mobile application UI developers: 90.9% (2: Earcon, Media); average 61.4%
    Expert mobile application UI developers: 100.0% (9: Auditory Icon, Earcon, Harmony, Media, Pitch, Sound Habituation, Sound Intensity, Sound Masking, Tone); average 60.0%
    Non-mobile application UI developers: 90.0% (1: Tone); average 63.2%
    Mobile application UI developers: 100.0% (1: Media); average 59.0%

Figure 14 - Summary of Term Familiarity Within Subsets of Respondents

From the above table, two important points become immediately obvious: (1) that Music Experts are, in general, more familiar with the terminology presented (for any given term, on average more than three quarters of these respondents reported familiarity); and (2) that the terms with which the majority of respondents are familiar are in fact musical terms (for instance, Chord, Harmony, Octave and Pitch). Interestingly, as might have been expected, the terms most familiar to audio-enhanced GUI developers as a group are Auditory Icon, Earcon, and Modality. Based on the detail shown in Figure 12, Figure 15 summarises the maximum percentage of respondents in each group who gave a correct definition for any given term (at the 0.5+ level of correctness) - for example, there were 3 terms for which all of the respondents who play or have played a musical instrument gave a correct answer (at the 0.5+ correctness level). Additionally, Figure 15 shows the number of terms for which this maximum occurred and lists the specific terms for which this is the case. Finally, it highlights the average percentage of respondents who provided a correct definition (at the 0.5+ level of correctness) for any given term.


Respondent subset: max % giving a correct (0.5+) definition for any given term (number of terms at max: terms); average % giving a correct (0.5+) definition for any given term

    All respondents: 80.0% (1: Chord); average 58.5%
    Play(ed) musical instrument: 100.0% (3: Sound Habituation, Tone, Transient Sound); average 67.6%
    Receive(d) musical tuition: 100.0% (2: Sound Habituation, Timbre); average 68.0%
    Have musical qualifications: 100.0% (2: Motif, Sound Habituation); average 61.1%
    Acoustics experts: 100.0% (1: Register); average 65.5%
    Music experts: 100.0% (2: Register, Sound Intensity); average 68.3%
    Audio-enhanced GUI builders: 100.0% (10: Auditory Stream, Chord, Octave, Pitch, Presentational Resource, Semitone, Sound Intensity, Sound Masking, Timbre, Transient Sound); average 71.1%
    Expert non-mobile application UI developers: 85.7% (3: Sound Habituation, Sound Intensity, Timbre); average 55.2%
    Expert mobile application UI developers: 100.0% (4: Chord, Modality, Octave, Timbre); average 30.0%
    Non-mobile application UI developers: 80.0% (1: Sustained Sound); average 56.5%
    Mobile application UI developers: 100.0% (3: Octave, Semitone, Timbre); average 49.7%

Figure 15 - Summary of percentage of correct responses within each respondent subset (at 0.5+ level of correctness)

From Figure 15 it can be seen that in the majority of groups, all respondents managed to correctly define at least one term. There is little to differentiate the musical and acoustic groupings in terms of the average percentage of respondents in each group to correctly define (at the 0.5+ level of correctness) any given term. Relatively speaking, the largest average group percentage for correct definition of any given term falls to the developers of audio-enhanced GUIs - on average, 71.1% of respondents in this group got any given definition correct at the 0.5+ level. The group which performs worst in this respect is the subset of respondents who are expert mobile application user interface developers. Interestingly, the terms defined correctly by the maximum percentage of respondents in each group do not correspond to the terms with which the largest percentage of respondents in each group are familiar. For instance, a maximum of 95% of respondents who play or have played a musical instrument are familiar with the terms Chord, Harmony, Octave, and Pitch (see Figure 11 and Figure 14), but it is for the terms Sound Habituation, Tone, and Transient Sound that the maximum percentage of respondents in this subset gave a correct definition. The reason for this change in terminological focus is unclear. However, at the level of speculation, it may be that although respondents are familiar with certain terms - perhaps because they are in everyday use - they may not be able to define them correctly. Perhaps even more interesting is that, as shown in Figure 11, it is some of the terms with which fewest respondents are familiar that have registered the greatest percentage of correct definitions - for example, Presentational Resource, Sound Habituation, and Auditory Stream.
In these cases, it is likely that the respondents have been able to correctly define the terms by a process of deduction and guess work rather than as a consequence of actually knowing the correct technical definition. Figure 16 summarises the maximum percentage of respondents in each group who gave a correct definition for any given term (at the 1.0 level of correctness) - for example, there were 2 terms for which all of the respondents who play or have played a musical instrument gave a correct answer (at the 1.0 correctness level). As with the previous figure, Figure 16 also shows the number of terms for which this maximum occurred and lists the specific terms for which this is the case. Finally, it highlights the average

page 20

Page 21: A SURVEY OF AUDIO-RELATED KNOWLEDGE AMONGST SOFTWARE …sonify.psych.gatech.edu/~ben/references/lumsden_a... · beneficial when used to enhance both graphical and non-visual human-computer

percentage of respondents who provide a correct definition (at the 1.0 level of correctness) for any given term.

Respondents | Max % Giving Correct (1.0) Definition For Any Given Term | No. of Terms For Which Max % Was Recorded | Terms For Which Max % Gave Correct (1.0) Definition | Avg % Giving Correct (1.0) Definition For Any Given Term
All | 76.9% | 1 | Sustained Sound | 32.9%
Play(ed) musical instrument | 100.0% | 2 | Sound Habituation, Transient Sound | 39.0%
Receive(d) musical tuition | 100.0% | 1 | Sound Habituation | 39.0%
Have musical qualifications | 100.0% | 1 | Sound Habituation | 35.8%
Acoustics experts | 88.9% | 1 | Sustained Sound | 35.9%
Music experts | 100.0% | 1 | Sound Intensity | 33.3%
Audio-enhanced GUI builders | 100.0% | 1 | Transient Sound | 39.8%
Expert non-mobile application UI developers | 60.0% | 3 | Sound Habituation, Sound Masking, Transient Sound | 29.3%
Expert mobile application UI developers | 100.0% | 2 | Modality, Timbre | 15.0%
Non-mobile application UI developers | 80.0% | 1 | Sustained Sound | 31.9%
Mobile application UI developers | 66.7% | 3 | Octave, Sound Masking, Timbre | 25.8%

Figure 16 - Summary of percentage of correct responses within each respondent subset (at 1.0 level of correctness)

From the above table it can be seen that, once again, there is little to differentiate the musical and acoustic groupings in terms of the average percentage of respondents in each group to completely correctly define any given term (although respondents who play or have played a musical instrument and respondents who have received formal musical tuition fared marginally better). As with the previous results, the largest relative average group percentage for completely correct definition of any term falls within the audio-enhanced GUI builders - on average, 39.8% of respondents in this group provided a completely correct definition for any given term. The respondents who performed worst in this respect are those in the mobile application UI development groups - that is, both expert and non-expert developers of user interfaces to mobile applications (15.0% and 25.8% respectively). Interestingly, the terms which received maximum correct definitions within each group do not readily reflect the group's characteristics. For example, the musical groups provided the most accurate definitions for terms which are acoustical software engineering in nature (sound habituation, for example) whereas most mobile application user interface developers were able to correctly define musical terms (octave and timbre). Unfortunately, no explanation for this observation is forthcoming from the results of the study.

3.5 General Feedback

To finish, the questionnaire asked respondents for general feedback regarding their personal opinions of, and questions they would like answered about, the use of audio-enhancement in user interfaces to software applications. As part of this investigation, respondents were asked to outline the level of detail regarding the practical issues surrounding audio-enhancement of software application user interfaces that they would like to be presented with if/when faced with using a toolkit of audio-enhanced widgets to design and develop a user interface enhanced with sound (see Appendix A). Respondents were given the following scale by which to answer this question: (1) I don't want to know about the sounds, I just want to use the widgets; (2) I want to use the widgets as they are but control the sounds added to them without needing to understand sound/acoustics; (3) I want to build new widgets from scratch and add them to the toolkit without needing to understand sound/acoustics; and (4) I want to know more about acoustics and the way in which sounds are used to enhance user interface widgets so that I can develop my own widgets from scratch and include them in the toolkit. Respondents were free to select one or more of these categories. The results are shown in the following table, in which the listed categories correspond to the numbered descriptions above.

Category | Number of Respondents
1 | 5
2 | 13
3 | 4
4 | 8

Figure 17 - Respondents' self-categorisation in terms of desire for technical knowledge when using a toolkit of audio-enhanced widgets for user interface design and development

Figure 17 illustrates that half of the respondents want to use the widgets in the toolkit as supplied but, without the need for detailed acoustic knowledge, to be able to control the sounds used by the widgets (category 2). One third of the respondents want to acquire full knowledge of the way in which sounds are used to enhance user interface widgets so that they can develop customised widgets for inclusion in the toolkit (category 4). However, it is interesting to note that almost 40% of these respondents also reported a desire to use the toolkit of widgets as outlined in category 2. Only 50% of the respondents who reported a desire to build new widgets from scratch without the need for detailed acoustic understanding (category 3) listed this as their only categorisation - the remaining 50% classified themselves in 2 or 3 of the other categories as well. Approximately 11% of respondents classified themselves as falling into category 1 only - that is, using the widgets as they stand and requiring no knowledge of acoustics. This amounts to 3 out of the 5 respondents who placed themselves in category 1 - the remaining respondents in this category also classified themselves in at least one of the other available categories. Only 2 of the 26 respondents claimed that they would never be in a position to use the facilities provided by a toolkit of audio-enhanced widgets. These findings suggest that, amongst user interface developers, the greatest demand with respect to a toolkit of audio-enhanced user interface widgets is to be able to use the widgets without needing to achieve a detailed understanding of acoustics, but with the ability to modify the sounds used by the widgets.
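To make the category 2 usage level concrete - keeping a widget exactly as supplied while swapping only the sound it uses, with no acoustics knowledge required - the following sketch shows the kind of API shape such a developer appears to want. All names here (AudioWidgetSketch, AudioButton, setPressSound) are illustrative assumptions, not the actual Audio Toolkit API, and real audio playback is replaced by a log for simplicity:

```java
import java.util.ArrayList;
import java.util.List;

public class AudioWidgetSketch {

    /**
     * A hypothetical audio-enhanced button. The widget ships with a default
     * press sound; a "category 2" developer uses the widget as supplied and
     * only swaps the named sound asset - no acoustics knowledge needed.
     */
    static class AudioButton {
        private String pressSound;
        private final List<String> playLog = new ArrayList<>();

        AudioButton(String defaultSound) {
            this.pressSound = defaultSound;
        }

        /** Category-2 hook: replace the supplied sound with another asset. */
        void setPressSound(String soundName) {
            this.pressSound = soundName;
        }

        /** Pressing the button "plays" (here: logs) the current sound. */
        void press() {
            playLog.add(pressSound);
        }

        List<String> playLog() {
            return playLog;
        }
    }

    public static void main(String[] args) {
        AudioButton ok = new AudioButton("default-click"); // widget as supplied
        ok.press();
        ok.setPressSound("marimba-tap");                   // swap the sound only
        ok.press();
        System.out.println(ok.playLog()); // [default-click, marimba-tap]
    }
}
```

The design point of such a hook is that the sound remains a named, replaceable asset: the widget's behaviour and the acoustic design of the sounds stay the toolkit author's responsibility, which is precisely the division of labour most respondents requested.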
That said, a small proportion who stated that they would use the widgets in this manner also classified themselves as developers who would like to acquire a more detailed understanding of acoustics in order to contribute customised widgets to the toolkit. Overall, these results illustrate that user interface developers are likely to want to approach a toolkit of audio-enhanced widgets in different ways depending on their intended or required use of the toolkit. Hence, any documentation supplied with such a toolkit will have to cater for differing entry and usage levels in order to meet the varying needs of developers.

When asked to express their opinion regarding the use of sound in user interfaces, respondents varied from enthusiasm to reluctance. Whilst some recognised that sound can be used to convey additional information that is not easily conveyed in the visual modality, others were convinced that it does nothing to significantly improve functionality (although they did suggest that its inclusion leads to more 'fun' interaction). One respondent stated that:

"while I can see that in specific circumstances (especially assistive technology) thoughtful use of sound could be an excellent way forward, in most cases I would be reluctant to use it. Too many web sites are already making thoughtless, if not mindless, use of sound. In many cases I regard it as pollution."

This was a commonly held opinion, with other respondents stating that sound should only be used where it demonstrably enhances the user experience, and still others agreeing that sound is rich in information and is therefore a valuable form of feedback in any interface, but warning that it can be "extremely annoying" when it is obtrusive and uncontrollable - one went so far as to state that it is easy to build a "ghastly interface" simply by "throwing in sounds" and that this is why most people switch off sounds when using their PC. Several respondents acknowledged the significance of sound in software user interfaces for the blind whilst others, looking to the future, recognised that sound will become significantly more important within user interfaces as new types of interaction require 'eyes-free' operation. One respondent stated that:

"Any decent interface should make good use of sounds. Trouble is, we still have a lot to learn about what is good use of sounds."

The above quote succinctly sums up the current situation with respect to the use of sound in software application user interfaces. As this survey has shown, the level of accurate acoustic knowledge amongst user interface designers is typically very limited. As reflected in the opinions of the respondents, the current use of sound in user interfaces is generally considered "annoying" and, whilst it has the potential to add significantly to the user experience, it is presently perceived as doing quite the opposite. Although many respondents agreed on the potential importance of sound in user interfaces, the manner in which it is currently being used - most likely as a result of a lack of knowledge and experience on the part of user interface designers - is giving audio-enhanced user interfaces a 'bad reputation'. This situation heightens not only the need for well-documented and well-designed facilities by which user interface developers can include audio-enhanced widgets within their designs, but also the need for guidelines to support user interface developers in their use of these widgets.
When asked to identify any questions they had concerning the use of sound in software application user interfaces, respondents listed questions such as:

- what additional use would sound be - is it considered assistive technology?
- how effective are hybrids of earcons and auditory icons?
- are earcons suitable for representing complex concepts?
- if users need to wear headphones when using an audio-enhanced user interface, how does this affect their range of movement away from the system - can the user talk on the phone or hear a person nearby speaking while using the system?
- will the sounds annoy the user - if so, how long before they do?
- will users ignore the sounds, and what are the consequences of ignoring them?
- will users disable sounds, and what might be the consequences of this action?
- if the speaker malfunctions, how will the system signal this to the user?
- what are the user's environmental sound level, distraction level, and distance from the device to the ear, and will sounds on one system annoy users of nearby systems?
- will multiple devices making sounds simultaneously make users confused or anxious?
- how can a system's sounds be uniquely identified if multiple systems are present?
- will systems' sounds create social or security problems for users?
- how far from the issuing system will the sounds be heard?

Additionally, one respondent stated that:

"I'd also want to know a lot more about recognition and the psychology of sound if you like - there has been lots of work done on visual systems, eye scanning, memory etc. I'd like to know something about the audio equivalent before I could feel confident about using sound properly."

These questions illustrate the type of issues surrounding the use of sound for which respondents would like clarification. Although current research is not yet in a position to address all of these issues, much of the work done on the Audio Toolkit can provide at least an initial response to these questions. By documenting a response to as many of these issues as possible, it is hoped that the Audio Toolkit will be able to allay some of the typical audio-related concerns expressed by user interface designers and, in so doing, enable these designers to make better, more confident, and more informed use of sound in their user interface designs.

4. SUMMARY AND CONCLUSIONS

This survey was undertaken to acquire the kind of information that would enable the design and authoring of documentation and guidelines for the Audio Toolkit that would optimise its accessibility and thereby its effective use. Based on a series of hypotheses concerning typical knowledge and experience of acoustics and music amongst user interface developers, the survey investigated software engineers' extent of user interface design and development experience, their level of acoustic knowledge, and their level of musical knowledge in order to determine what (if any) assumptions may be made about software engineers' prior levels of understanding of acoustics and/or music and to identify any relationships between developers' understanding of the aforementioned and their user interface design activities.

To summarise the most important findings with respect to the generation of documentation to support the Audio Toolkit, it was discovered that:

(1) design and development of audio-enhanced user interfaces is not a common activity;
(2) the visual paradigm (text and graphics) remains predominant in user interface design - especially across mobile application user interfaces - and audio commands a larger place in non-mobile application user interfaces than in user interfaces to mobile applications;
(3) the average self-assessed level of acoustic knowledge amongst developers is very low;
(4) the majority of developers of audio-enhanced user interfaces have to resort to generating their own sounds for use in their designs - sounds generated by third party software engineers do not currently appear to meet the needs of these developers;
(5) as expected, the greater a developer's practical experience of sound generation, the higher his/her self-assessed acoustic knowledge level - such developers' average acoustic knowledge level is therefore considerably higher than that of user interface developers who avoid the use of audio;
(6) the mobility of the applications for which developers are designing user interfaces has no impact on the developers' level of acoustic knowledge;
(7) there is little overlap between developers' sound generation activities in terms of user interface use and non-user interface use;
(8) developers are more experienced musically than acoustically - musical ability of some description is commonplace - and where developers exhibit strong musical ability, they report higher average levels of acoustic knowledge;
(9) few of the developers designing audio-enhanced user interfaces have formal musical qualifications to back up their activities - less than one third of developers who generate sounds for use in user interfaces have musical qualifications to support this process;
(10) developers' ability to read sheet music has a strong impact on their self-assessed level of musical experience - more so than their ability to play a musical instrument;
(11) despite their audio-based activity, the majority of audio-enhanced user interface developers have no formal musical background;
(12) more than one third of respondents have musical qualifications;
(13) developers are generally unfamiliar with acoustical software engineering and musical terminology - especially acoustical software engineering terminology; and
(14) in general, depending on circumstance, developers want either to use the toolkit widgets as they stand with the ability to alter the sounds used, or to attain a complete understanding of audio in order to develop and include customised audio-enhanced widgets.

On the basis of the findings of this study, the following should be considered essential when authoring accessible documentation (including guidelines) to support the effective use of the Audio Toolkit.

- Since the level of acoustic knowledge/experience amongst user interface designers is typically low, all documentation supporting the Audio Toolkit needs to be targeted such that it is accessible to developers with little or no experience of acoustics.

- Although more commonplace than acoustic knowledge, little can be assumed about developers' formal musical background (including their ability to read sheet music). However, since the ability to read sheet music has been shown to have a considerable effect on developers' knowledge and experience levels (both acoustic and musical), ensuring a basic understanding of written music is likely to be beneficial to the acceptance and understanding of the Audio Toolkit. Given the extent to which musical knowledge at some level is commonplace amongst software developers, it should be relatively straightforward to include a basic explanation of written music within documentation accompanying the Audio Toolkit.

- Given the lack of accurate understanding of acoustic and musical terminology (especially acoustic) amongst user interface developers, care should be taken to monitor the use of specialised terms when authoring documentation supporting the Audio Toolkit. Terminology should be biased towards music rather than acoustics (given software developers' greater familiarity with musical as opposed to acoustic terminology) and any terminology used should be explicitly defined.


- Substantiating the need not only for the Audio Toolkit but also for its supporting documentation and guidelines, it has been highlighted that the majority of developers who generate sounds for use in user interfaces have no formal musical qualifications16. Hence, accessible guidelines are absolutely essential.

- In light of the fact that developers have expressed a desire to use the Audio Toolkit in different ways - each of which requires a different level of understanding of audio - documentation and guidelines supporting the Audio Toolkit must be structured in such a way as to accommodate different levels of use and thereby different requirements for knowledge.

Prior to this study, no investigation had been conducted into the audio-related background knowledge and development activity of user interface developers. Although it has focussed on the implications of this information for the design and authoring of documentation to support the release of a toolkit of audio-enhanced widgets, this report has presented the results of one such study, which serves as a snapshot of the current state of relevant knowledge and development activity pertaining to audio-enhanced user interface design and development amongst typical user interface developers. As audio-enhanced user interfaces become as commonplace as graphical user interfaces, these knowledge and experience levels will undoubtedly increase. Until that is the case, however, user interface developers need to be well supported in their evolving use of audio-enhancement of user interfaces; it is hoped that the information gleaned as a result of performing this study will enable the Audio Toolkit and its associated documentation to do just that.

16 Perhaps this, in the absence of acoustic knowledge, is a contributory factor to the general perception amongst software developers and users alike that audio-enhanced user interfaces are 'annoying' - see Section 3.5.


REFERENCES

Alty, J.L. (1995). Can we use music in human-computer interaction? In M.A.R. Kirby, A.J. Dix and J.E. Finlay (eds), Proceedings of HCI'95, Cambridge University Press, pp. 409-423.

Beaudouin-Lafon, M. and Conversy, S. (1996). Auditory illusions for audio feedback. In M. Tauber (ed.), ACM CHI'96 Conference Companion, ACM Press, Addison-Wesley, pp. 299-300.

Brewster, S.A. (1997). Using non-speech sound to overcome information overload. Displays, Special issue on multimedia displays, vol. 17, pp. 179-189.

Brewster, S.A. (1998). The design of sonically-enhanced widgets. Interacting with Computers, 11(2), pp. 211-235.

Brewster, S.A. and Crease, M.G. (1997). Making menus musical. In Proceedings of IFIP Interact'97 (Sydney, Australia), Chapman & Hall, pp. 389-396.

Brewster, S.A. and Crease, M.G. (1999). Correcting menu usability problems with sound. Behaviour and Information Technology, 18(3), pp. 165-177.

Brewster, S.A., Leplatre, G. and Crease, M.G. (1998). Using non-speech sounds in mobile computing devices. In Johnson, C. (ed.), Proceedings of the First Workshop on Human Computer Interaction with Mobile Devices (Glasgow, UK), Department of Computing Science, University of Glasgow, pp. 26-29.

Computing Market Dynamics (2000). Report Number: CM00-05MC, August 2000, www.instat.com/abstracts/pm/2000/cm0005mc_abs.htm

Crease, M.G. and Brewster, S.A. (1998). Making progress with sounds - the design and evaluation of an audio progress bar. In Proceedings of ICAD'98 (Glasgow, UK), British Computer Society.

Crease, M. and Brewster, S.A. (1999). Scope for progress - monitoring background tasks with sound. In Volume II of the Proceedings of INTERACT '99 (Edinburgh, UK), British Computer Society, pp. 19-20.

Crease, M., Brewster, S.A. and Gray, P. (2000a). Caring, sharing widgets: a toolkit of sensitive widgets. In Proceedings of BCS HCI2000 (Sunderland, UK), Springer, pp. 257-270.

Crease, M., Gray, P. and Brewster, S.A. (1999). Resource sensitive multimodal widgets. In Volume II of the Proceedings of INTERACT '99 (Edinburgh, UK), British Computer Society, pp. 21-22.

Crease, M., Gray, P. and Brewster, S.A. (2000b). A toolkit of mechanism and context independent widgets. In Design, Specification and Verification of Interactive Systems (Workshop 8, ICSE 2000), Limerick, Ireland, pp. 127-141.

I.D.G. (2000). Hand Check: The Smart Handheld Devices Market Forecast and Analysis 2000-2005, www.thestandard.com/article/0,1902,27326,00.html


APPENDIX A: SOFTWARE ENGINEERING AUDIO-ENHANCED USER INTERFACES

Part 1 – Personal Details

1.1 Name:

1.2 Tel. No:

1.3 Fax. No:

1.4 E-mail Address:

1.5 Professional Qualifications & IT Related Experience:

1.6 Job Title:

1.7 Responsibilities/Rôle in Co:

As part of our research we may wish to further discuss with you some of the issues raised in this questionnaire. If you do not wish to be contacted in the future, please tick the box:

Part 2 – User Interface Design & Development Experience

The following questions aim to determine your level of experience or specialisation in user interface design & development.

2.1 Have you ever or do you currently design & develop user interfaces for use on non-mobile devices?

Yes    No

If ‘yes’ go to question 2.2; if ‘no’ go to question 2.6

2.2 Please rate your level of experience of graphical user interface design & development for non-mobile devices using the following scale from 1 - 5 where 1 is inexperienced and 5 is expert:

Inexperienced  1    2    3    4    5  Expert

2.3 Using which programming languages have you designed & implemented graphical user interfaces for use on non-mobile devices? (please list)


2.4 Have you ever designed & developed graphical user interfaces which are enhanced with sound (beyond standard system-level beeps) for use on non-mobile devices?

Yes    No

If ‘yes’ please provide a brief description of the user interface design(s) and the sounds used:

2.5 Have the graphical user interfaces for non-mobile devices which you have designed & developed been for safety-critical applications? e.g. user interfaces for medical surgical equipment

Yes    No

2.6 Have you ever or do you currently design & develop user interfaces for use on mobile devices (e.g. mobile phones)?

Yes    No

If ‘yes’ go to question 2.7; if ‘no’ go to Part 3

2.7 Please rate your level of experience of user interface design & development for mobile devices on the following scale from 1 - 5 where 1 is inexperienced and 5 is expert:

Inexperienced  1    2    3    4    5  Expert

2.8 Using which programming languages have you designed & implemented user interfaces for use on mobile devices? (please list)

2.9 Please indicate which of the following output modalities have been included in the user interfaces you have designed & developed for use on mobile devices?

Text
Graphics
Audio: standard non-speech i.e. standard system/error beeps
       synthesised speech
       enhanced audio-feedback e.g. the use of musical quality and/or everyday sound

Please provide a brief description of the user interface design(s) and any sounds used:

2.10 Have the user interfaces for use on mobile devices which you have designed & developed been for safety-critical applications? e.g. mobile medical equipment

Yes    No


Part 3 – Acoustic Experience/Knowledge

3.1 Please indicate your level of experience or knowledge of acoustics using the following scale from 1 – 5 where 1 is inexperienced and 5 is expert:

Inexperienced  1    2    3    4    5  Expert

3.2 Have you ever or do you currently design & generate sound(s) for inclusion in user interfaces?

Yes    No

If ‘yes’ please provide a brief description of the sounds designed & their use in the user interface(s); if ‘no’ please go to question 3.4

3.3 Using which programming languages & other hard/software have you designed & generated sound(s) for inclusion in user interfaces? (please list)

3.4 Have you ever or do you currently design & generate sound(s) for use other than in user interfaces?

Yes    No

If ‘yes’ please provide a brief description of the sounds designed & their use:

Part 4 – Musical Experience/Knowledge

4.1 Please rate your level of technical experience or knowledge of music using the following scale from 1 – 5 where 1 is inexperienced and 5 is expert:

Inexperienced  1    2    3    4    5  Expert

4.2 Have you ever or do you currently play a musical instrument (including voice)?

Yes    No

If ‘yes’ please specify; if ‘no’ go to question 4.4

4.3 Have you ever or do you currently receive formal instrumental (or voice) tuition?

Yes    No


4.4 Do you have any formal musical qualifications (e.g. have you sat any music-related examinations, either practical or theoretical)?

Yes    No

If ‘yes’ please specify; if ‘no’ go to question 4.5

4.5 Can you read sheet music?

Yes    No

PART 5 – INTERPRETATION OF ACOUSTIC & MUSICAL TERMINOLOGY

Music and acoustic-related terminology is often confusing. The following questions aim to determine your interpretation of some key terminology in these fields. To this end, for each term please indicate whether you are familiar with it and, if so, please provide a brief interpretation of its meaning as you understand it. Please do not refer to text books/dictionaries to answer these questions: you are not being tested on your knowledge; rather, the ambiguity and/or familiarity of the terminology is being assessed.

5.1

TERM                     INTERPRETATION
Acoustic Signature       Unfamiliar    Familiar
Auditory Icon            Unfamiliar    Familiar
Auditory Stream          Unfamiliar    Familiar
Chord                    Unfamiliar    Familiar
Earcon                   Unfamiliar    Familiar
Harmony                  Unfamiliar    Familiar
Media                    Unfamiliar    Familiar
Modality                 Unfamiliar    Familiar
Motif                    Unfamiliar    Familiar
Octave                   Unfamiliar    Familiar
Pitch                    Unfamiliar    Familiar
Presentational Resource  Unfamiliar    Familiar
Register                 Unfamiliar    Familiar
Semitone                 Unfamiliar    Familiar
Sound Habituation        Unfamiliar    Familiar
Sound Intensity          Unfamiliar    Familiar
Sound Masking            Unfamiliar    Familiar
Sustained Sound          Unfamiliar    Familiar
Timbre                   Unfamiliar    Familiar
Tone                     Unfamiliar    Familiar
Transient Sound          Unfamiliar    Familiar

PART 6 – GENERAL FEEDBACK

6.1 If you were required to use a toolkit of audio-enhanced user interface widgets (e.g. graphical buttons, menus, listboxes etc. enhanced with sound), into which of the following categories would you place yourself?

I don’t want to know about the sounds, I just want to use the widgets.
I want to use the widgets as they are but control the sounds added to them without needing to understand sound/acoustics.
I want to build new widgets from scratch and add them to the toolkit without needing to understand sound/acoustics.
I want to know more about acoustics and the way in which sounds are used to enhance user interface widgets so that I can develop my own widgets from scratch and include them in the toolkit.

6.2 Please list any questions or issues you have regarding the use of sound in user interfaces: e.g. if you were required to build a user interface that included advanced sounds, what questions would you like answered, what information would you need?

What is your opinion regarding the use of sound in user interfaces?

THANK YOU FOR COMPLETING THIS QUESTIONNAIRE
