UCI
Agents for Collecting Application Usage Data Over the Internet
David M. Hilbert, David F. Redmiles
Information and Computer Science
University of California, Irvine
Irvine, California 92697-3425
{dhilbert,redmiles}@ics.uci.edu
http://www.ics.uci.edu/pub/eden/
Motivation
• The behavior of interactive systems is complex, involving
– the application
– its users
– the use environment
• These factors are typically too complex, dynamic, and poorly understood to model effectively
• Thus, empirical evaluation of software in actual use situations is critical
Impact of the Internet
• On the positive side
– cheap, rapid, large-scale distribution of software for evaluation
– simple transport mechanism for evaluation data
– should make getting evaluation data easier
• On the negative side
– reduces opportunities for traditional user testing
– increases variety of use situations and number/distribution of users
– exceeds capabilities of current techniques for collecting data
Problem
• Prototyping, beta testing, usability testing help
– refine system requirements
– identify issues with system and user behavior
– evaluate usability and utility
• However
– beta testers make poor data collectors (incentives, expertise, detail)
– usability testing limited (size, scope, location, duration)
• Instrumentation and event logging
– low signal-to-noise ratio
– missing context
– lack of abstraction
– coupling of application and instrumentation
Our Research
• Higher quality data with fewer restrictions on evaluation size, scope, location, duration
• Features
– Focused data collection
– Reduction and analysis prior to reporting
– Use of context in analysis
– Abstraction in analysis
– Separation of instrumentation and application
• A way to get answers to empirical questions such as
– how is this application being used?
– how should development and testing resources be allocated?
– how can the design be improved to better match use?
Overview
• Usage Expectations (Theory)
• Expectation Agents (Approach)
• Usage Scenario (Example)
• Agents and Architecture (Detail)
• Conclusions and Challenges
Usage Expectations
• Usage expectations
– affect design decisions
– are embodied in design decisions
• Because most expectations are not represented explicitly, they are
– not tested adequately
– not always recognized by developers
• Bringing expectations and actual use into alignment is important
– improve design
– improve use
– improve training and on-line help
– improve overall usability and utility
Expectation Agents
• Developers design applications and create expectation agents
– agents encode knowledge about expected usage
• Agents are deployed to run on users’ computers
– agents observe users as they interact with the application
– agents detect mismatches and collect data and user feedback
– agents report back to developers to inform evolution
• Agents support purposeful redesign of the application/agents
Examples
• Feature usage statistics (enabling, disabling)
• Relative use of differing feature invocation mechanisms
• Feature okays, cancellations, undos
• Feature-related warning and error rates
• Use of help facilities
• Menu searching and selection behavior
• Sequence of inputs/actions (e.g. in form- or dialog-based tasks)
Usage Scenario
• A transportation database query form
Agent Notification (Optional)
• Agents may post messages
User Response (Optional)
• Users may provide feedback
Repository for Review
• Agent-collected data and user feedback stored for review
Anatomy of an Agent
• Agents are instances of a simple Java class
– Trigger: patterns of events occurring in the UI (or agents)
– Guard: boolean expression involving state of the UI (or agents)
– Actions: arbitrary code or pre-supplied actions
• Agent operation
– Triggers continually checked as users interact w/ the application
– Guards checked if an agent trigger is activated
– Actions performed if guard evaluates to true
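The trigger/guard/action structure and operation cycle above can be sketched as a minimal Java class. This is a hypothetical illustration: the class shape, the event-string representation, and the property map are assumptions, not the actual EDEM API.

```java
import java.util.Map;
import java.util.function.Predicate;

/** Sketch of an expectation agent: trigger, guard, actions (hypothetical, not the real EDEM class). */
class ExpectationAgent {
    private final Predicate<String> trigger;             // pattern over UI (or agent) events
    private final Predicate<Map<String, Object>> guard;  // boolean expression over UI/agent state
    private final Runnable action;                       // arbitrary code or a pre-supplied action
    private boolean triggered = false;

    ExpectationAgent(Predicate<String> trigger,
                     Predicate<Map<String, Object>> guard,
                     Runnable action) {
        this.trigger = trigger;
        this.guard = guard;
        this.action = action;
    }

    /** Triggers are checked on every event; the guard only once a trigger has activated. */
    void processEvent(String event, Map<String, Object> uiState) {
        if (trigger.test(event)) {
            triggered = true;
        }
        if (triggered && guard.test(uiState)) {
            action.run();
            triggered = false;  // reset so the agent can fire again later
        }
    }
}
```

The separation mirrors the slide: the trigger filters the event stream, the guard consults current UI state, and only then do the actions run.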
Agent Authoring
• An agent that fires when the “mode of travel” section is edited
[Screenshot: agent-authoring dialog showing the agent’s Trigger, Guard, and Actions fields, alongside the list of Agents]
Event Specification
• Detecting when the user selects “AIR” as the “mode of travel”
[Screenshot: event-specification dialog listing Widgets and Events]
EDEM Architecture
[Architecture diagram: On the development computer, agent specs are saved with a URL on an HTTP server, and an EDEM server stores the collected data. On the user’s computer, EDEM active agents run in the Java virtual machine alongside the application’s UI components, observing the top-level window and UI events and exchanging property queries and property values. Agent specs are loaded via URL; agent reports are sent back via e-mail.]
Conclusions
• Usage expectations
– help focus data collection
– raise awareness of implications of design decisions
• Agent-based architecture
– data analyzed and reduced prior to reporting
– context used in interpreting the significance of events
– event modeling and analysis at multiple levels of abstraction
– independent evolution of instrumentation and application
• Combined
– higher quality data with fewer restrictions on evaluation size, scope, location, duration
– a way to get answers to important empirical questions
Challenges
• Generalize event model
– JavaBeans, external events
• Improve authoring and reuse support
– default agents, Wizards
• More flexible analysis and reporting
– arbitrary computation and state, JDBC integration
• Better integration of expectations into the development process
– existing design artifacts, guidelines
• Agent maintenance, configuration management, and versioning
• Security and privacy
For More Info
• http://www.ics.uci.edu/pub/eden/
Expectations
Usage Expectations
• Usage expectations
– affect design decisions
– are embodied by designs
• Usage expectations come from:
– knowledge of requirements
– knowledge of application domain
– knowledge of user tasks, practices, and environments
– past experience developing and using applications
• Mismatches between expectations and actual use
– may indicate problems with the design and/or usage
– may affect usability and/or utility
Characteristics of Expectations
• Some expectations are represented explicitly
– e.g. in requirements, task analyses, scenarios, use cases
• Most expectations are implicit
– e.g. encoded in window layout, toolbar/menu design, accelerator key assignments, user interface libraries
• Examples:
– “users complete forms from left to right and top to bottom”
– “frequently used features are easy to access, recognize, and use”
• Because most usage expectations are not represented explicitly, they often:
– fail to be tested adequately
– fail to be explicitly recognized by developers
Resolving Mismatches
• Detecting and resolving mismatches between developers’ expectations and actual usage can help improve:
– design, automation, training, on-line help, and use
• Once detected, mismatches may be resolved in two ways:
– Developers may modify their expectations to better match actual usage, thus refining requirements and improving the design
• e.g., features expected to be used rarely but used often in practice can be made easier to access
– Users may learn about developers’ expectations, thus learning to use the existing system more effectively
• e.g., learning they are not expected to type full URLs in Netscape’s Communicator can lead users to omit characters such as “http://”
More Details
Triggers
• Triggers are specified in terms of the following patterns:
– “A or B or . . .” (Disjunction)
– “A and B and . . .” (Conjunction)
– “A then B then . . .” (Sequence)
– “(A and B) with no interleaving C” (Conjunction w/ exclusion)
– “(A then B) with no interleaving C” (Sequence w/ exclusion)
• Where variables A, B, C are filled in by specifying an event and an event source
– e.g. “GOT_EDIT:AIR” (the “AIR” choice button was selected)
– e.g. “FIRED:Section 1” (the “Section 1” agent fired)
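One of the patterns above, “(A then B) with no interleaving C”, can be sketched as a small state machine over the event stream. This is a hypothetical illustration of the pattern’s semantics, not the actual EDEM trigger engine; the event names in the usage follow the slide’s examples.

```java
/** Sketch of a "(A then B) with no interleaving C" trigger as a state machine. */
class SequenceWithExclusionTrigger {
    private final String a, b, c;
    private boolean sawA = false;

    SequenceWithExclusionTrigger(String a, String b, String c) {
        this.a = a; this.b = b; this.c = c;
    }

    /** Feed each event in arrival order; returns true when the pattern completes. */
    boolean onEvent(String event) {
        if (event.equals(c)) {                 // an interleaving C cancels a pending A
            sawA = false;
        } else if (event.equals(a)) {          // start of the sequence
            sawA = true;
        } else if (sawA && event.equals(b)) {  // A then B, with no C in between
            sawA = false;
            return true;                       // trigger activated
        }
        return false;
    }
}
```

The other patterns (disjunction, conjunction, plain sequence) would be simpler variants of the same event-feeding interface.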
Guards
• Guards are specified in terms of the following patterns:
– “A or B or . . .” (Disjunction)
– “A and B and . . .” (Conjunction)
• Where variables A, B are filled in by an expression involving the state of a UI component or agent
– e.g. “value = TRUE : AIR” (the “AIR” choice button is currently selected)
– e.g. “count > 10 : Section 1 Reset” (the “Section 1 Reset” agent has fired over 10 times)
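A guard term like “value = TRUE : AIR” pairs a property test with a component name. A minimal evaluator might look like the sketch below; the term syntax is taken from the slide, but the parsing and the property-map representation are assumptions, and only equality tests are handled (comparisons like “count > 10” are omitted for brevity).

```java
import java.util.Map;

/** Sketch: evaluate guard terms of the form "<property> = <expected> : <component>". */
class GuardEvaluator {
    // Current UI/agent state, keyed by "component.property", e.g. "AIR.value" -> "TRUE"
    private final Map<String, String> properties;

    GuardEvaluator(Map<String, String> properties) {
        this.properties = properties;
    }

    /** Evaluate one term, e.g. "value = TRUE : AIR". */
    boolean evalTerm(String term) {
        String[] parts = term.split(":", 2);      // test vs. component name
        String component = parts[1].trim();
        String[] test = parts[0].split("=", 2);   // property vs. expected value
        String property = test[0].trim();
        String expected = test[1].trim();
        return expected.equals(properties.get(component + "." + property));
    }

    /** Conjunction of terms ("A and B and . . ."). */
    boolean evalAll(String... terms) {
        for (String t : terms) {
            if (!evalTerm(t)) return false;
        }
        return true;
    }
}
```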
Actions
• Actions may include arbitrary code, but usually involve pre-supplied actions such as:
– generating higher-level events for further hierarchical event processing
– interacting with users to provide suggestions and/or collect feedback, and
– reporting data back to developers
Integrating with EDEM
• void initialize()
– load agents
• void addMonitors(Object obj)
– recursively add monitors to this component and all subcomponents
• void setName(Object obj, String name)
– name any component to be monitored that doesn’t have a unique label
• void processEvent(Event evt)
– pass events to EDEM for processing
• void finalize()
– remove monitors and send log & summary
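The call sequence implied by this API can be illustrated against a stub. The method names come from the slide; the bodies merely record the call order, since the real implementations are not shown here.

```java
import java.util.ArrayList;
import java.util.List;

/** Stub sketching the EDEM integration call sequence (placeholder bodies). */
class EdemStub {
    final List<String> trace = new ArrayList<>();

    void initialize()                     { trace.add("initialize"); }            // load agents
    void setName(Object obj, String name) { trace.add("setName:" + name); }       // label unnamed components
    void addMonitors(Object obj)          { trace.add("addMonitors"); }           // recursively monitor component tree
    void processEvent(String evt)         { trace.add("processEvent:" + evt); }   // forward UI events
    // finalize() would remove monitors and send the log & summary at shutdown
}
```

A host application would call initialize() at startup, name and monitor its top-level components, forward UI events as they occur, and rely on finalize() at shutdown to report the log and summary.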
Related Work
Related/Supporting Technologies
• Related
– Collaborative remote usability testing techniques
– Beta test data collection (e.g. Aqueduct Profiler)
– API usage monitoring (e.g. HP/Tivoli ARM API)
– Enterprise management (e.g. TIBCO Hawk)
– Model-based distributed debugging (e.g. EBBA & TSL)
• Supporting
– Event notification systems (e.g. TIB/Rendezvous)
– Mobile agent infrastructure (e.g. ObjectSpace Voyager)
Collaborative Remote Usability
• Collaborative video and electronic whiteboards allow traditional usability testing to be done remotely.
• EDEM and collaborative remote usability techniques might be used independently or in concert depending on the application and evaluation goals.
• URL for information on remote usability testing:
– http://hci.ise.vt.edu/~josec/remote_eval/
Collaborative Remote Usability
• EDEM
– asynchronous
– non-intrusive
– quantitative behavioral & performance data plus user comments
– potentially large numbers of concurrent subjects
– ideal for large-scale, ongoing studies of usage
• Remote Usability
– synchronous
– intrusive
– video capture of behavior & performance that can be reviewed later, plus verbal protocols
– single or small groups of subjects
– ideal for small-scale, focused experiments
Beta Test Data Collection
• Aqueduct Profiler collects information over the Internet about the usage of applications in beta test.
• Aqueduct provides an API for collecting application-specific information (e.g., feature usage) which is reported, via Email, along with other generic measures such as operating system, execution time, crashes, etc.
• EDEM and Aqueduct collect information that is both related and complementary, using techniques that are complementary.
• URL for Aqueduct Software:
– http://www.aqueduct.com/
Beta Test Data Collection
• EDEM
– developers define agents which may be modified and delivered separately from code.
– captures information about feature usage
– captures information about usability aspects more readily
– Java only
• Aqueduct
– developers instrument code, requiring redelivery when instrumentation is modified.
– captures information about feature usage
– captures information about crashes more readily
– multiple platforms
API Monitoring
• An application response-time measurement (ARM) API allows data regarding usage of an API (as opposed to a UI) to be captured.
• Instruments all important API calls to indicate start of call, characteristics of parameters, and end of call.
• Information is used to identify performance bottlenecks and parameter and API usage.
• EDEM and ARM could be used independently or in concert depending on application & evaluation goals.
• URL for HP and Tivoli’s proposed standard:
– http://www.hp.com/openview/rpm/arm/
API Monitoring
• EDEM:
– collects information about UI usage
– developers define agents which may be modified and delivered separately from code.
– general UI events
• An ARM API:
– collects information about API usage
– developers instrument code requiring redelivery when instrumentation is modified.
– specific API events
Enterprise Management
• Enterprise management tools help administrators manage nodes within a wide area network by monitoring processes, CPU utilization, applications, network statistics, log files, and file system activity.
• Rule bases are (often) used to specify what to monitor and how to report and react to problems.
• An API allows developers to instrument applications to be monitored & controlled.
• URL for TIBCO’s HAWK Enterprise Monitor:
– http://www.tibco.com/products/hawk_ds.html
Enterprise Management
• EDEM:
– focuses on UI events
– use of agents to collect information and take actions
– exploits existing event model
• TIBCO’s Hawk:
– focuses on network monitoring & management
– use of agents to collect information and take actions
– comes w/ built-in agents to monitor specific operating systems and common applications, otherwise, API is used.
Distributed Debugging
• Model-based distributed debugging techniques allow specification and monitoring of abstract models of, or formal constraints on, the behavior of event-based concurrent systems.
• Techniques used to specify event patterns & computed properties are related.
• References:
– P.C. Bates. Debugging heterogeneous distributed systems using event-based models of behavior. ACM Transactions on Computer Systems, Vol. 13, No. 1, 1995.
– D.S. Rosenblum. Specifying Concurrent Systems with TSL. IEEE Software, Vol. 8, No. 3, 1991.
Supporting Technologies
• Event Notification
– EDEM uses SMTP to asynchronously report agent-collected data.
– TIB/Rendezvous allows events to be synchronously reported based on a publish/subscribe paradigm.
– URL for TIB/Rendezvous:
• http://www.rv.tibco.com/
• Mobile Agent Technology
– EDEM uses HTTP to transport agents.
– ObjectSpace Voyager provides a more flexible and capable platform for agent mobility based on an agent-enhanced object request broker (ORB) paradigm.
– URL for ObjectSpace Voyager:
• http://www.objectspace.com/