
Page 1: Usability Problems - Categorization and Prioritization Methodologies

Armen Chakmakjian HF750 – SP11 2/10/11

Usability Problems

Categorization and Prioritization Methodologies

Table of Contents

Introduction: What is a usability problem?..................................................................2

Categorization and Prioritization Methodologies........................................................3

Rubin and Chisnell Categorization and Prioritization........................................................3

Nielsen Categorization and Prioritization...............................................................................3

Wilson Severity Scale...................................................................................................................4

Spool Characterization Exercises.............................................................................................4

Strengths and Weaknesses................................................................................................ 5

Summary and Conclusion................................................................................................. 6

Works Cited........................................................................................................................ 9


Introduction: What is a usability problem?

Usability problems can generally be defined as the absence of, or failure to achieve, some desirable quality. These qualities vary in definition and tend to correlate with the discipline from which the definer hails. For example, the recommended text for this class (Rubin & Chisnell, 2008) defines the list of criteria as:

Usefulness, Efficiency, Effectiveness, Learnability, Satisfaction, Accessibility

Rubin and Chisnell are well-respected and experienced usability specialists whose

areas of expertise are usability testing and consulting. Management and IT researchers such as Drs. Monideepa Tarafdar and Jie Zhang use a slightly different set of criteria, culled from "existing studies" (Tarafdar & Zhang, 2005):

Easy to use, Challenging, Relevant, Visually Appealing, Well laid out, Use of multimedia

That last criterion seems a bit out of place and specific, but the authors of that study posit

“Given the rapid advances in multimedia technologies, organizations are now increasingly

using multimedia elements for displaying and communicating audio and video information

through their websites.”

Jenifer Tidwell, a designer and programmer, defines usability by the use of patterns.

Programming with known patterns or paradigms makes a methodology recognizable and supports performance improvement and maintainability. Applying the same criteria to usable interfaces, Tidwell (Tidwell, 2005) defines usability patterns such as

Safe Exploration, Instant Gratification, Satisfying,


Changes in Midstream, Deferred Choices, Incremental Construction, Habituation (Wilson, 1999), Spatial Memory, Prospective Memory, Streamlined Repetition, Keyboard Only, and Other People's Advice

She further states: “an interface that supports these patterns well will help users achieve

their goals far more effectively than interfaces that don't support them”. (Tidwell, 2005).

While these expert lists are useful, they do not directly lend themselves to systematic categorization and ranking. Frameworks that do are discussed in the remainder of this paper.

Categorization and Prioritization Methodologies

Many different categorization schemes are used for usability issues; for the most part they attempt to achieve the same goal. For the purposes of this paper, four representative models were studied: Rubin & Chisnell, Nielsen, Wilson, and Spool.

Rubin and Chisnell Categorization and Prioritization

Rubin and Chisnell categorize each issue using two sets of criteria: severity and frequency. First, each issue is placed into a severity bucket, as seen in Exhibit 1. Each

of the issues is then categorized by its frequency of occurrence (see Exhibit 2), which takes

into account the percentage of total users affected and the probability that a user from the

affected group will experience the problem.

Ascertaining a problem's criticality is then a simple matter of adding the severity ranking

and the frequency ranking for that problem. (Rubin & Chisnell, 2008)
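The addition described above can be sketched in a few lines; the issue data and the helper name are illustrative assumptions, not from the book:

```python
# A minimal sketch of the Rubin & Chisnell criticality calculation.
# Issue names and rankings below are invented, loosely based on the exhibits.

def criticality(severity, frequency):
    """Criticality = severity ranking (1-4) + frequency ranking (1-4)."""
    if severity not in (1, 2, 3, 4) or frequency not in (1, 2, 3, 4):
        raise ValueError("both rankings must be on the 1-4 scale")
    return severity + frequency

# (issue, severity, frequency) tuples
issues = [
    ("crash on power-on at altitude", 4, 1),
    ("sync blocked by in-use files", 3, 3),
    ("shaded message area on screen", 1, 4),
]

# most critical issues first
ranked = sorted(issues, key=lambda i: criticality(i[1], i[2]), reverse=True)
```

Note that ties are common with two 1-4 inputs, which is one reason teams may want a secondary sort key.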

Nielsen Categorization and Prioritization

Jakob Nielsen describes a method of categorizing severity that combines four factors into a sort of mental math for the categorizers. These components are described in Exhibit 3. He

mentions “even though severity has several components, it is common to combine all aspects

3

Page 4: Usability Problems - Categorization and Prioritization Methodologies

Armen Chakmakjian HF750 – SP11 2/10/11

of severity in a single severity rating as an overall assessment of each usability problem in

order to facilitate prioritizing and decision-making.” (Nielsen, 1995) Nielsen also mentions

that the number of evaluators and the time needed to do the evaluation are less important than

the fact that the evaluators provide individual severity ratings independently. (Nielsen, 1995)
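Nielsen's point about independence can be sketched as collecting one rating per evaluator and combining them only afterward. Averaging is a common aggregation, though the exact rule here, like the problem names and ratings, is an assumption for illustration:

```python
from statistics import mean

# Each evaluator rates each problem alone on Nielsen's 0-4 severity scale;
# ratings are combined only after all evaluators have finished.
independent_ratings = {
    "ambiguous save icon": [2, 3, 2],
    "hidden sync status":  [4, 3, 4],
}

combined = {problem: mean(rs) for problem, rs in independent_ratings.items()}
```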

Wilson Severity Scale

Wilson contends that usability issues should be rated on the same scale as programming bugs.

(Wilson, 1999) Using a scale from 1 to 5, where a 1 marks the worst offense (see Exhibit 6), Wilson describes a set of attributes (see Exhibit 5) "that should be considered when rating the severity of usability issues." (Wilson, 1999) Wilson also advises that, since these issues occur in the midst of a development process, a usability engineer should attend any bug triage run by development or product management, to bring the usability perspective to that process. A side effect would be to train development and product management about usability issues. (Wilson, 1999)

Spool Characterization Exercises

Spool describes a technique, rather than a rubric, by which to rank usability issues. He

recommends a KJ exercise. “The method, in less than 45-minutes, allows teams to come to a

democratic consensus on an answer, avoiding endless discussion for elements that turn out to

be unimportant." (Spool, 2007). The point of this exercise is to allow the team to both categorize and prioritize issues as a side effect of agreeing on the categorization of issues that answer questions such as "What are the most important usability problems we need to fix in

this version of the design?” or “Which user populations are most important to our business?”

(Spool, 2007)

For the purposes of this exercise, I also looked at a detailed summary of the Dumas and Redish method (Swain, 2010), but found no significant differentiating factor that compelled me to add it to this analysis.


Strengths and Weaknesses

Any technique that allows a team both to make data-driven decisions and to communicate an understandable decision-making process is the strongest solution. Of the four techniques, the Spool technique is the strongest when questions or team dissension are evident. Its weakness is that the playing field keeps changing depending on many factors, such as team composition and business pressures.

The Wilson severity scale, while using what appears to be a data-driven model, uses a priority system in reverse ordinal order. This is very common in bug tracking systems: a P1 issue (i.e., Priority 1) tells the team that it is the first thing they should be working on. The attributes that Wilson asks the teams to use (Exhibit 5) are

relatively generic and map quite nicely to both front end and back end software issues. This

means that development engineers, product managers and usability engineers can use the

same words and values to rank a usability issue.
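Because the Wilson scale is reverse ordinal, a triage queue sorts ascending so that P1 items surface first. A sketch, with invented bug names:

```python
# Triage ordering under Wilson's reverse ordinal scale (Level 1 = worst).
bugs = [
    ("minor cosmetic inconsistency", 5),
    ("irrevocable data loss on delete", 1),
    ("workaround exists but wastes time", 3),
]

# ascending sort puts the most severe (P1) issues at the head of the queue
triage_queue = sorted(bugs, key=lambda b: b[1])
```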

The Nielsen methodology is quite understandable and at first blush makes complete sense. However, because the four factors don't individually map to a ranking system but require the team analyzing the issue to feel its way to an overall ranking, I deem it a less rigorous method. At a minimum, it requires team members with enough experience, or at least enough history of working together, to agree on how the four factors add up to a ranking.

Finally, the Rubin and Chisnell method has many attributes that make logical sense to a team. For example, more important issues get a higher ranking number, rather than the reverse order used by Wilson. The method also allows weighting of the ranking by including frequency as one of the two factors. Where it breaks down is that the description of how to calculate frequency (Rubin & Chisnell, 2008) once again suggests inexact math: raters must map to a single frequency ranking by combining frequency of occurrence with the size of the affected population. While experienced teams may be able to coalesce around this calculation, it might be better to break those two numbers out so


to account for the variance of team member expertise. A system that requires someone to know something that isn't on the table at the time of analysis is less than ideal where team composition may vary.
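Keeping the two inputs to the frequency calculation as separate explicit numbers might look like this; the field names and figures are illustrative assumptions, not from the book:

```python
# Keep the two frequency inputs separate instead of blending them into a
# single 1-4 frequency ranking at rating time.
issue = {
    "name": "sync blocked by in-use files",
    "severity": 3,            # Rubin & Chisnell 1-4 severity bucket
    "users_affected": 0.40,   # fraction of the user population affected
    "p_occurrence": 0.25,     # chance an affected user hits the problem
}

# expected exposure: fraction of all users who hit the problem
exposure = issue["users_affected"] * issue["p_occurrence"]
```

Recording the two numbers separately leaves the judgment call (how to weight them) visible for later reviewers instead of baking it into a single ranking.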

Summary and Conclusion

Many experts attempt to provide frameworks for measuring the usability of a

particular solution. In real project work, only the ones that allow a data-driven methodology

are really useful when trying to prioritize the issues that need to be addressed.

In this paper 4 representative models of ranking were studied: Rubin & Chisnell

(Rubin & Chisnell, 2008), Nielsen (Nielsen, 1995), Wilson (Wilson, 1999), and Spool

(Spool, 2007). Each has strengths and weaknesses. In my years of experience in the

software industry, I offer that any system closely matching the bug prioritization schemes already in use on software teams will be the most palatable to a cross-functional team. Therefore, my selection of the most desirable methodology to use for

usability issue severity categorization and ranking is the Wilson method. I will hold out that

on small teams with little process or on teams where there are significant areas of

disagreement, the Spool recommendation for using a KJ analysis is a tool to have available in

those dire situations.

Ultimately, as Rubin and Chisnell point out (Rubin & Chisnell, 2008), true usability is

invisibility. If something gets done without the user knowing, satisfies their expectation

without extra effort, gives them a pleasant feeling of accomplishment or some combination of

those three, then you have avoided usability problems in an area.


Exhibit 1: Rubin and Chisnell Usability Issue Severity Categorization

4 (Unusable): The user either is not able to or will not want to use a particular part of the product because of the way that the product has been designed and implemented.

Example: Product crashes unexpectedly whenever it is powered on at altitude.

3 (Severe): The user will probably use or attempt to use the product, but will be severely limited in his or her ability to do so. The user will have great difficulty in working around the problem.

Example: Synchronizing the device to another device can only happen when certain files are not in use. It isn't obvious when the files are in use.

2 (Moderate): The user will be able to use the product in most cases, but will have to take some moderate effort in getting around the problem.

Example: The user can make sure that all complementary applications are closed while syncing the two devices.

1 (Irritant): The problem occurs only intermittently, can be circumvented easily, or is dependent on a standard that is outside the product's boundaries. Could also be a cosmetic problem.

Example: The message area of the device's small screen is at the very top, dark blue, and often shaded by the frame of the screen.

(Rubin & Chisnell, 2008)

Exhibit 2: Rubin and Chisnell Usability Issue Frequency Ranking

Frequency ranking (estimated frequency of occurrence):

4: Will occur >90% of the time the product is used

3: Will occur 51-89% of the time

2: Will occur 11-50% of the time

1: Will occur <10% of the time

(Rubin & Chisnell, 2008)

Exhibit 3: Nielsen Usability Severity Factors

Frequency: Is it common or rare?

Impact: Will it be easy or difficult for the users to overcome?

Persistence: Is it a one-time problem that users can overcome once they know about it, or will users repeatedly be bothered by the problem?

Market Impact: Effect on the popularity of a product, even if the problems are "objectively" quite easy to overcome.

(Nielsen, 1995)


Exhibit 4: Nielsen Usability Issue Severity Rating Scale

0: I don't agree that this is a usability problem at all

1: Cosmetic problem only: need not be fixed unless extra time is available on project

2: Minor usability problem: fixing this should be given low priority

3: Major usability problem: important to fix, so should be given high priority

4: Usability catastrophe: imperative to fix this before product can be released

(Nielsen, 1995)

Exhibit 5: Wilson Usability Attributes To Consider For Severity Ranking

Performance: Performance is often a primary usability attribute. It can be poor (e.g., searching a database takes several minutes to complete) or too good (e.g., autoscrolling can be difficult on a fast machine).

Probability of loss of critical data: An example would be the wrong default button on the confirmation message "Do you want to delete this message?". Choosing "Yes" as the default button would be very serious because it would cause the loss of the entire database.

Probability of error: What is the impact of the error on time, money, or reputation? (For example, you make the default "reply to all" and your whole team gets a message meant for only one person.)

Violations of standards: Some companies have user interface standards. Violation of standards at one company I worked for was a high-level usability bug even if the actual violation was not too severe. Standards are mandatory, so violations of standards usually carry a higher penalty than violations of guidelines.

Impact on profit, revenue, or expenses: A high-volume data-entry system should be optimized for the keyboard. Forcing a user to switch from the keyboard to the mouse repeatedly could easily slow input down and increase expenses. For high-volume input, too many keyboard-mouse transitions could be a severe usability problem.

Aesthetics, readability, clutter: Does the user cringe when the screen comes up? Can the user find information in dense screens? This is sometimes a hard attribute to judge, but for Web pages and GUIs this is a serious issue.

(Wilson, 1999)

Exhibit 6: Wilson Usability Severity Scale

Level 1: Catastrophic error causing irrevocable loss of data or damage to the hardware or software. The problem could result in large-scale failures that prevent many people from doing their work. Performance is so bad that the system cannot accomplish business goals.

Level 2: Severe problem causing possible loss of data. The user has no workaround to the problem. Performance is so poor that the system is universally regarded as "pitiful."

Level 3: Moderate problem causing no permanent loss of data, but wasted time. There is a workaround to the problem. Internal inconsistencies result in increased learning or error rates. An important function or feature does not work as expected.

Level 4: Minor but irritating problem. Generally it causes no loss of data, but it slows users down slightly; there are minimal violations of guidelines that affect appearance or perception, and mistakes are recoverable.

Level 5: Minimal error. The problem is rare and causes no data loss or major loss of time. Minor cosmetic or consistency issue.

(Wilson, 1999)

Works Cited

Nielsen, J. (1995). Severity Ratings for Usability Problems. Retrieved February 6, 2011, from Useit.com: http://www.useit.com/papers/heuristic/severityrating.html

Rubin, J., & Chisnell, D. (2008). Handbook of Usability Testing (2nd ed.). Indianapolis, Indiana, USA: Wiley Publishing, Inc.

Spool, J. (2007, August 24). Resolving Group Name Differences in a KJ Analysis. Retrieved February 6, 2011, from User Interface Engineering: http://www.uie.com/brainsparks/2007/08/24/resolving-group-name-differences-in-a-kj-analysis/

Swain, A. (2010). Categorizing Usability Problems. Class report, Bentley, HF, Waltham.

Tarafdar, M., & Zhang, J. (2005). Analysis of Critical Website Characteristics: A Cross-Category Study of Successful Websites. The Journal of Computer Information Systems, 46(2), 14-24.


Wilson, C. (1999, April). Usability Interface: Reader's Questions: Severity Scales. Retrieved February 6, 2011, from The Usability SIG Newsletter: http://www.stcsig.org/usability/newsletter/9904-severity-scale.html
