“Ensuring successful CRM implementations – can Model Driven QA make a difference?”
A Webinar by eBay and Cognizant
7th October 2010
Survey 1
Q. If your organization has undertaken a Siebel implementation in the last 2 years, are you satisfied with the way it was implemented?
Tick one of the following options:
• Yes, it went without major issues
• No, we were expecting it to go smoother
Satisfaction Levels for CRM Projects
*Please agree or disagree with the following statement: “Business results anticipated from the implementation were met or exceeded”
Source: “Answers To Five Frequently Asked Questions About CRM Projects”, a report by Forrester Vice President and Principal Analyst Bill Band, published in 2008

Statistics on failed CRM projects, as assessed by leading analyst firms:
• 2001 Gartner Group: 50%
• 2002 Butler Group: 70%
• 2002 Selling Power, CSO Forum: 69.3%
• 2005 AMR Research: 18%
• 2006 AMR Research: 31%
• 2007 AMR Research: 29%
• 2007 Economist Intelligence Unit: 56%
• 2009 Forrester Research: 47%
Source: “CRM Failure Rates: 2001-2009”, a blog post by Michael Krigsman on ZDNet
eBay Speaker Introduction
Steve Hares - Senior Quality Engineering Manager, Release Manager
eBay - Customer Support Technology Solutions
Steve Hares is the Senior Manager of Quality Assurance for eBay's Customer Support Technologies Group. He is responsible for the quality metrics for all software delivered to eBay's customer support agents. In the last 6 months, Steve was instrumental in establishing the partnership with Cognizant that achieved all quality metrics for the production deployment of eBay's new CRM solution. Before eBay, Steve was both a Product Development manager and a QA manager at Avaya, Lucent, Ascend Communications, and a host of small start-ups. Email: [email protected]
Cognizant Speaker Introduction
Rajarshi Chatterjee (Raj), Director and Head of CSP-Testing
Raj heads Customer Solutions Practice - Testing (CSP-Testing), a group incubated in early 2009 as a new horizontal that combines the expertise of Cognizant's Testing practice and Customer Solutions Practice (CSP). With 400+ associates, CSP-Testing specializes in testing CSP applications in the CRM, BPM & CDI space.
email: [email protected]
About eBay and their CRM Program
About eBay
Founded in 1995, eBay connects millions of buyers and sellers globally on a daily basis and is the world's largest online marketplace. Our subsidiary PayPal enables individuals and businesses to securely, easily and quickly send and receive online payments. We also reach millions through specialized marketplaces such as StubHub, the world's largest ticket marketplace, and eBay Classifieds, which together have a presence in more than 1,000 cities around the world. In 2009, eBay realized $9B in revenue. The total worth of goods sold on eBay was $60 billion, or $2,000 every second, and we have 92 million active users at present.
About the Unify program
The Unify Program was initiated to make eBay's customer support and service the best in the industry. We wanted to ensure the best user experience at all times: from the moment a user (buyer or seller) raises a request, through research, until it is resolved.
What was this Program about?
Simplifying the service management platform and making it more scalable
Before the Unify Program (figure): buyers and sellers reached CSRs through web forms, email, chat and phone; requests were handled across many applications (Chat, CSI, iPOP, PDA, SAP, SoD, AD, IVR, SFDC, eWFM) backed by regional data warehouses.
• Multiple applications
• Multiple definitions & answers
• Poor data quality
• No global view
After the Unify Program (figure): buyers and sellers still reach CSRs through web forms, email, chat and phone, but all channels, activities, content and regions converge on a single enterprise agent tool (case & content management), backed by one data warehouse and better-integrated applications, opening opportunities (e.g., CRM).
• Single case management system, globally
• Fewer applications, better integrated
• Fewer data silos; hence consistent data
Key Program Objectives
Key Program Objectives (figure):
• Goals: NPS, Resolution Cost
• Key Metrics (measured KPIs): Reduced Transfers, First Contact Resolution, Agent Utilization Rate, Average Handle Time
• Enablers: Consistent Global Process Adoption, System Retirement, End State Alignment, Integrated Case & Contact History, Accurate Content, Enhanced Reporting
• Benefits: Consistent member experience, Agent efficiency / accuracy, Simplified technology stack
• Cost Factors: Maintenance / Supportability
A program of this magnitude was not without its own risks!
Risk Assessment
Dimensions along which each component was assessed:
• Stability of requirements
• Proneness to performance issues, based on past experience
• New product: not enough familiarity with it
• Impact of customization
• Critical link in the chain that could be a single point of failure
• Uniqueness to the eBay environment
• Testability

60+ functional components were evaluated along the above dimensions, plus Level of Effort.

Risk Score of key solution components (bar chart): User, Accounts & Contact Management; Operational Applications - Integration; Online Channels - Telephony & Chat; Service Request Management / Life cycle; Online Channel Integration.

Over 60% of the application was scored at High or Moderate risk. This meant some critical decisions had to be taken right at the beginning.
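To make the assessment concrete, here is a minimal sketch (in Python, not the actual tool used) of how a component might be rated along the dimensions above and bucketed into High / Moderate / Low; the dimension weights and thresholds are invented for illustration.

```python
# Invented dimension weights; they sum to 1.0.
RISK_DIMENSIONS = {
    "requirements_stability":  0.20,
    "performance_history":     0.20,
    "product_familiarity":     0.15,
    "customization_impact":    0.15,
    "single_point_of_failure": 0.15,
    "environment_uniqueness":  0.10,
    "testability":             0.05,
}

def risk_score(ratings: dict) -> float:
    """Weighted score in [0, 1]; each rating is 0 (low risk) .. 1 (high risk)."""
    return sum(RISK_DIMENSIONS[d] * ratings[d] for d in RISK_DIMENSIONS)

def risk_bucket(score: float) -> str:
    if score >= 0.6:
        return "High"
    if score >= 0.4:
        return "Moderate"
    return "Low"

# Hypothetical ratings for one of the 60+ components:
component = {d: 0.8 for d in RISK_DIMENSIONS}
print(risk_bucket(risk_score(component)))   # -> High
```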
Key Decisions
1. Have distinct systems of record, with minimal overlap between data silos.
2. Take an iterative approach to development and QA: new code every 2-3 weeks.
3. Innovate, but know early if something is not going to work, through Proofs of Concept.
4. Identify test data needs early; have a focused team working on test data.
5. Baseline application performance with each build.
6. Test early, and test often.
MOST IMPORTANT: Bring in expertise where needed – Select the Right Partners!
Key Decisions – Selecting the Right Partner
Rigorous process for vendor selection:
• Defined 33 criteria with clear objectives
• Weighted the criteria based on priority and importance
• Set a target score for each criterion
• Had each vendor rated by every member of a panel to derive weighted scores
Criteria | What it included | Company #1 | Company #2 | Cognizant
RFP Process | (i) Level of detail (ii) Transparency | 100% | 133% | 133%
Expertise | (i) Prior track record in large Siebel QA, CTI, CCA (ii) Knowledge of specific tools | 65% | 82% | 94%
Methodology | (i) Approach specific to each technology component (ii) Test data preparation (iii) Onsite-offshore model | 67% | 117% | 117%
Cost | (i) Professional services (ii) Value-adds / tools from vendor (iii) Infrastructure | 92% | 100% | 100%
Firm | (i) Prior experience working with eBay (ii) Alliance with product OEM vendor | 100% | 88% | 119%
Requirements | (i) Understanding of requirements (ii) Ability to address change | 51% | 106% | 107%
(All figures are weighted scores.)
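A hedged sketch of the weighted-score arithmetic the table implies: each vendor's panel rating is taken relative to the target score for that criterion and weighted by priority. The weights, targets and ratings below are invented, not the actual selection data.

```python
criteria = {
    # name: (weight, target panel rating)  -- invented values
    "rfp_process":  (0.10, 3.0),
    "expertise":    (0.25, 4.0),
    "methodology":  (0.20, 3.5),
    "cost":         (0.15, 3.0),
    "firm":         (0.10, 3.5),
    "requirements": (0.20, 4.0),
}

def pct_of_target(ratings: dict) -> dict:
    """Per-criterion score as a percentage of target, as shown in the table."""
    return {name: round(100 * ratings[name] / target)
            for name, (_w, target) in criteria.items()}

def overall(ratings: dict) -> float:
    """Single weighted score across all criteria, relative to target."""
    return sum(w * (ratings[n] / t) for n, (w, t) in criteria.items())

panel_avg = {"rfp_process": 4.0, "expertise": 3.8, "methodology": 4.1,
             "cost": 3.0, "firm": 4.2, "requirements": 4.3}
print(pct_of_target(panel_avg))
print(f"overall: {overall(panel_avg):.0%}")
```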
Critical Differentiator: a Model Driven approach to QA
Experiences of the SI Partner
Overview of the Service Request Workflow (figure), spanning the eBay Site, Avaya, CCA, CTI, Assignment Manager, Siebel and InQuira:
• Phone: incoming request (phone), IVR interaction, phone session initiation, member login and verification, assignment to agent
• Chat: incoming request (chat), chat session initiation, member login and verification, assignment to agent
• Web form: incoming request (web form), assignment to agent
• Email: incoming request (email), assignment to agent
• Channel independent: account and contact creation; SR creation / classification; Siebel passes case context to InQuira; InQuira returns potential solutions / templates based on case context; agent researches case; agent resolves case; agent closes case
What made this Implementation Complex?
1. Unlike most other Siebel implementations, by design no user transaction is executed from start to finish entirely within Siebel.
2. These other applications were being developed at the same time as Siebel, i.e., there was very little ability to test any one system in the presence of the others.
Release timeline (Q4 through Q2, December to June):
• Foundational design
• Block 1: Siebel & Portal customization
• Block 2: Site integration
• Block 3: E-mail and web channel integration
• Block 4: Phone & chat integration
In short, we had to think about how we would test multiple applications while they were still on the drawing board. The inability to replicate all of these applications in the QA environment at the same time necessitated an innovative approach.
That approach was Modeling!
Model driven approach to QA: the 3 components of model driven testing
1. Modelling for Functional Testing
• Ensuring exhaustive coverage
• Regression testing
• Risk based testing
• Test Driven Development
2. Modelling for Test Data preparation
• For functional testing
• For performance testing
3. Modelling for Performance
• Infrastructure sizing
• Single and multi user Load and Performance Testing
Model driven approach to QA: goals for Modelling for Functional Testing
• Ensuring exhaustive coverage
• Keeping test scenarios in sync with an evolving application
• Repeated testing of "weak links" in the chain
• Test driven development: alerting before is better than detecting later
Modeling in 3 Easy Steps using ADPART
1. Define Business Processes: create process flow diagrams in ADPART or import them from Visio.
• Input: pre-conditions, triggering events, user input
• Process details: tasks to be executed, rules, parameters that determine the outcome
• Output: expected response, messages / notifications, triggers to start / stop other processes
2. Define Parameters for each step: set variables & data types.
3. Generate Test Scenarios: render scenarios automatically.
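To make step 3 concrete, here is a minimal sketch of automatic scenario rendering: exhaustively enumerating paths through a process-flow graph and crossing them with step parameters. ADPART itself is proprietary; the graph, step names and parameters below are invented for illustration.

```python
# Hypothetical process-flow graph: step -> list of possible next steps.
flow = {
    "member_login": ["create_sr"],
    "create_sr":    ["assign_agent"],
    "assign_agent": ["resolve", "escalate"],
    "escalate":     ["resolve"],
    "resolve":      [],
}
# Output of step 2: parameters (variables and their values) per step.
params = {"create_sr": {"topic": ["Buying", "Selling"]}}

def enumerate_paths(flow, node="member_login", prefix=()):
    """Yield every start-to-end path through the flow graph."""
    prefix = prefix + (node,)
    if not flow[node]:          # terminal step: a complete scenario
        yield prefix
        return
    for nxt in flow[node]:
        yield from enumerate_paths(flow, nxt, prefix)

# One test scenario per path, per parameter combination.
for path in enumerate_paths(flow):
    for topic in params["create_sr"]["topic"]:
        print(" -> ".join(path), f"[topic={topic}]")
```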
How did we exploit the model?
Regression Testing: automatic comparison between 2 versions of business processes
• Highlights differences due to newly added, modified and deleted steps
• Enables creation of multiple test suites
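A minimal sketch of such a model comparison, assuming each process version is held as a mapping from step ID to step definition (ADPART's real representation is proprietary):

```python
def diff_models(v1: dict, v2: dict):
    """Compare two versions of a business-process model.

    Each model maps step_id -> step definition. Returns the step IDs that
    were added, deleted, and modified between versions.
    """
    added    = sorted(v2.keys() - v1.keys())
    deleted  = sorted(v1.keys() - v2.keys())
    modified = sorted(s for s in v1.keys() & v2.keys() if v1[s] != v2[s])
    return added, deleted, modified

# Toy example with invented step names:
v1 = {"login": "verify member", "create_sr": "classify by topic",
      "assign": "skill-based"}
v2 = {"login": "verify member", "create_sr": "classify by topic + priority",
      "assign": "skill-based", "escalate": "route to tier 2"}

print(diff_models(v1, v2))   # (['escalate'], [], ['create_sr'])
```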
Probability Simulation: enables forcing of specific paths by modifying the probability at decision nodes
• To test incremental functionality
• To assess risks from exception situations
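The idea can be sketched as a weighted random walk over the process graph: raising an edge's probability biases generated scenarios toward the path under test. The toy flow below is a stand-in for the real Unify workflow, with invented weights.

```python
import random

# Decision nodes carry weighted outgoing edges (invented weights).
flow = {
    "incoming_request": [("phone", 0.4), ("chat", 0.3), ("web_form", 0.3)],
    "phone":    [("assign_to_agent", 1.0)],
    "chat":     [("assign_to_agent", 1.0)],
    "web_form": [("assign_to_agent", 1.0)],
    "assign_to_agent": [("resolve", 0.9), ("exception", 0.1)],
    "resolve":   [],
    "exception": [],
}

def generate_scenario(flow, start="incoming_request"):
    """Random walk from start to a terminal node, honoring edge weights."""
    path, node = [start], start
    while flow[node]:
        nodes, weights = zip(*flow[node])
        node = random.choices(nodes, weights=weights)[0]
        path.append(node)
    return path

# Force the rare exception branch to dominate when testing exception handling:
flow["assign_to_agent"] = [("resolve", 0.1), ("exception", 0.9)]
print(generate_scenario(flow))
```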
The model is based on "expectations" from the system; its efficacy depends on the richness of the data used to simulate those conditions. This is what makes test data preparation so important!
Model driven approach to QA: goals for Modelling for Test Data preparation
Creating reusable data sets for:
• Functional testing
• Load & performance testing
Data – a Look at all the Types Involved
1. Business defined data that must be set up initially (actual values defined by SMEs):
• Application master data, i.e. drop-downs, lists of values etc.
• User data (user groups, roles, and sample user logins)
• Application settings: user navigation rules (e.g., the IVR menu tree), phone and chat routing rules in CCA, assignment rules in Siebel, merge & de-duplication rules (e.g., for contacts and cases in Siebel)

2. Business defined data that is created regularly but changes infrequently (dummy values, or cloned & masked data):
• Business entities, attributes and relationships: customer name, profile, contact information, account numbers, dummy credit card numbers

3. Transaction, event and rules driven data, i.e. data that changes very frequently (created using the application, API calls & automation):
• Transactions (e.g., interactions through phone, chat, web forms, email)
• System rendered, event driven updates, e.g. history log, audit trail
• Calculated fields, values returned from API calls to other applications, data looked up from other sources (e.g., customer rating)

4. Environment settings (vendor guidelines, extrapolated using test results):
• Server settings (e.g., session time-out, number of retries, page and memory settings)
• Network parameters and settings
• Test specific settings, e.g. number of concurrent users / sessions
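As an illustration of the second category, here is a small sketch of generating dummy business entities whose identifiers stay consistent across applications; the field names are hypothetical, not eBay's actual schema.

```python
import random
import string

def dummy_card() -> str:
    """16-digit dummy card number, masked except for the last 4 digits."""
    last4 = "".join(random.choices(string.digits, k=4))
    return "XXXX-XXXX-XXXX-" + last4

def make_customer(seq: int) -> dict:
    """Derive identifiers deterministically from seq, so the same record can
    be seeded into Siebel, the site database, etc., without drift."""
    return {
        "customer_name":  f"Test User {seq:06d}",
        "account_number": f"ACCT{seq:08d}",
        "contact_email":  f"testuser{seq}@example.com",
        "card_number":    dummy_card(),
    }

customers = [make_customer(i) for i in range(1, 6)]
for c in customers:
    print(c)
```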
Test Data: Challenges, Options & Results

Challenges:
• TDM tools require a source database
• Complex license cost estimation; overlapping legacy applications
• Many complex combinations to test (e.g., IVR menu options, bid items, service request types, agent skill types)
• Ensuring consistent data across applications (e.g., access control, Site, Siebel)
• Ensuring coherent data across applications, to reflect real-life end-to-end scenarios
• Identifying boundary conditions exhaustively
• Simulating ageing of data
• Creating large volumes for load testing
• Testing analytical reports

Options Evaluated:
• Optim, Datamaker
• Datagenerator

Finally Chosen:
• In-house developed tools, i.e. ADPART & OATS
• Automation scripts created in Selenium
• Proprietary APIs from eBay
• Excel macros created by Cognizant business analysts
• Custom scripts for data migration & Siebel EIM

Results Achieved:
• Exhaustive scenario coverage
• Data prepared for Training, QA and LnP environments
• More than 135 million records created for load testing
• Saved the license cost of TDM tools
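As a flavor of the Selenium-based data creation, here is a hedged sketch written against today's WebDriver API (the 2010 project would have used the Selenium of its day); the URL and element IDs are placeholders, not the real Siebel screens.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def create_service_request(driver, summary: str, topic: str) -> None:
    """Fill and submit a (hypothetical) new-service-request form."""
    driver.find_element(By.ID, "sr-summary").send_keys(summary)
    driver.find_element(By.ID, "sr-topic").send_keys(topic)
    driver.find_element(By.ID, "sr-submit").click()

driver = webdriver.Chrome()
try:
    for i in range(100):   # bulk-create test records through the UI
        driver.get("https://qa.example.com/new-service-request")  # placeholder
        create_service_request(driver, f"Seed SR {i}", "Buying")
finally:
    driver.quit()
```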
Model driven approach to QA: goals for Modelling for Load & Performance Testing
• Perform to scale: are expected response times met for each of the transactions?
• Scale to perform: to what extent can the servers and applications scale to support the maximum number of users without performance degradation?
Perform to Scale and Scale to Perform
Objectives:
• Validate infrastructure sizing for the final production environment
• Create a benchmark data point for critical and common user transactions
• Identify single points of failure and design limitations, if any
• Estimate the optimal number of users that could be supported at peak hours
Identifying and benchmarking critical user transactions: sample response times

Transaction | LAN - San Jose | WAN1 - Dublin | WAN2 - Manila
Query for Service Request ID in MySR screen | 3.76 | 22.54 | 229.96
Drill down to My eBay view for a user | 3.52 | 9.87 | 26.50
Drill down on a User ID for More Info details | 2.30 | 7.85 | 18.51
Go to Seller Activity view for an item | 1.80 | 4.15 | 9.60

Where did the OEM data fail? The per-user memory requirement was revised from 8 MB to 50 MB, and the AOM & EAI servers crashed.
Load Simulation Approach:
• Nearly 200 million records created via Siebel EIM for the performance test
• LnP data volume downsized by 40% compared to the estimate for production
• LoadRunner configured for 1,350 concurrent Siebel users and 500 CCA users
• Tests simulated for users in Dublin and Manila, using the SHUNRA WAN emulator
• Results extrapolated to the full data volume and 3,600 concurrent users!
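For intuition only, a toy load-generation sketch in the spirit of the LoadRunner setup above: N concurrent "users" hit one transaction URL and we report percentile response times. A real test also needs think time, WAN emulation and Siebel protocol scripting; the URL and numbers here are placeholders.

```python
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://qa.example.com/my-sr-query"   # placeholder transaction
CONCURRENT_USERS = 50                        # scaled down from 1,350
REQUESTS_PER_USER = 10

def one_user(_user_id):
    """One simulated user: issue requests serially, record response times."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        urllib.request.urlopen(URL, timeout=60).read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    samples = [t for user in pool.map(one_user, range(CONCURRENT_USERS))
               for t in user]

print(f"median: {statistics.median(samples):.2f}s")
print(f"p95:    {statistics.quantiles(samples, n=20)[18]:.2f}s")
```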
Perform to Scale and Scale to Perform: Value Additions
• Performance compared for 2 different server models (M5K and T5240), on memory and CPU utilization, to decide the right database server for Unify
• Production server sizing revalidated and capacity enhanced based on load test results
• Response time for 91% of transactions brought down to less than 1 second
• Desired server settings and database indexes determined for poorly performing SQLs
• Issues found with the server configuration, the CTI tool-bar and the OM server
• Optimal settings determined for Call Center and EAI Object Manager component tasks
• Optimal Load Balancer parameters identified
10 Learnings
1. Capture requirements through "modeling" to ensure fewer leakages during construction.
2. Parameterize requirements in the form of variables that describe the model through the values they take.
3. Determine the sequence of data loading; it is critical for creating consistent test data.
4. Determine how data will change with time; this is essential for creating good test data.
5. Automate simple use cases, such as login or search, from the unit testing stage onwards to eliminate manual intervention.
6. Plan parallel testing (integration & performance along with functional) if data silos have very little overlap.
7. Test the compatibility of all patches from the product OEM with all browsers, e.g. Google Chrome and Firefox.
8. Maintain a checklist of environment settings that must be verified before every deployment (see the sketch after this list).
9. Profile key transactions through load tests to identify potential bottlenecks.
10. Validate hardware sizing through proper load testing.
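A minimal sketch for learning 8: a pre-deployment check of environment settings against a checklist. The setting names and expected values are invented placeholders.

```python
import os

EXPECTED = {
    "SESSION_TIMEOUT": "1800",   # seconds (placeholder value)
    "MAX_RETRIES": "3",
    "OBJECT_MANAGER_TASKS": "100",
}

def verify_environment() -> list:
    """Return human-readable mismatches; an empty list means all checks pass."""
    problems = []
    for key, expected in EXPECTED.items():
        actual = os.environ.get(key)
        if actual != expected:
            problems.append(f"{key}: expected {expected!r}, found {actual!r}")
    return problems

if __name__ == "__main__":
    issues = verify_environment()
    if issues:
        raise SystemExit("Deployment blocked:\n" + "\n".join(issues))
    print("Environment checklist passed.")
```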
Questions?
Thank You!
Appendix screenshots: Generating Test Scenarios with one click; Defining Variables at each step.