Siebel Test Automation

white paper | April 2011


Executive Summary

Automation testing is finding favor in testing projects across various technologies and domains. It is becoming increasingly popular with Siebel applications because of its ability to reduce testing effort, which can contribute to a faster rollout/upgrade or lower maintenance costs for an already deployed application. This paper attempts to answer the following questions:

• What is the relative ease of automation testing of different areas of a Siebel application?

• What are the criteria for selecting appropriate functional test case candidates for automation?

• Given that multi-country deployment of Siebel applications is fairly common, what are the salient points to consider when planning automation testing for a Siebel deployment project?

Testing of Siebel applications is among the foremost challenges for clients planning on implementing Siebel in their IT enterprise. The fact that there are various modules of Siebel applications (Sales Force Automation, Marketing, Field Service), combined with different types of client instances (Web client, wireless Web client, handheld client, mobile Web client and dedicated Web client), creates a wide variety of testing scenarios for any Siebel implementation project. Post-implementation, there remains a challenge in terms of a disciplined testing effort for regular changes and upgrades. And because most large Siebel implementations span different geographies and languages, testing can be even more complex. A question that is normally faced during any Siebel testing project is, “Can we automate Siebel testing to a large extent to reduce the testing effort and cost and enable test case reuse for future projects?”

The selection of suitable areas of automation, given the various candidate areas from a typical Siebel implementation, is of paramount importance in order to optimize the automation script development exercise. It is the objective of this white paper to identify suitable areas of automation and provide guidelines on how the automated testing should be approached.

Relative Ease of Automation of Siebel Test Areas

There are four broad testing areas for a Siebel application: functional testing, analytics testing, data testing and interface testing. Careful consideration of each of these areas leads to the following observations regarding automated test case suitability:

Functional Testing

This is carried out to validate the business application components of the system. Functional testing is conducted on units, modules and business processes in order to verify that the application is functioning as requested. This covers areas such as validating whether a module is performing correctly (e.g., activities are successfully created), whether Siebel workflows and assignment rules are working properly, and whether the UI layout (i.e., look and feel, drop-down menus and text fields) is correct. Functional testing offers the maximum potential for automation of test cases. The following are a few of the areas where automation testing may be explored:

• Field validation in multi-value groups and pick applets

• List of Values (LOV) validation

• Modularizing test steps that repeat across multiple test cases (e.g., logon, screen navigation)

• Validation of multi-lingual label translations


Data Migration Testing

Data testing is another appropriate candidate for test automation. Loading of data from external sources to a staging table, from staging to EIM and from EIM to the database server, can be tested for data integrity throughout the migration process. After loading the data into the OLTP database, SQL queries can be used to check whether all the valid data has been loaded properly. For a large Siebel implementation where multiple instances of the same application are being rolled out, this would entail a huge effort. In order to test migration of data efficiently, and with reduced effort, automation testing can be introduced. Modern test automation tools provide support for directly querying databases without accessing the GUI, which facilitates automated testing of data.
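As a rough illustration of such GUI-less data checks, the sketch below compares a staging table against its target using plain SQL. All table and column names are illustrative, and sqlite3 stands in for whatever database driver (e.g., pyodbc or cx_Oracle) a project would actually use, so the sketch stays self-contained and runnable.

```python
# Minimal sketch: GUI-less data migration check via SQL (illustrative table
# and column names; any DB-API 2.0 connection could replace sqlite3).
import sqlite3

def count_rows(conn, table, where="1=1"):
    """Return the row count for a table, optionally filtered."""
    cur = conn.execute(f"SELECT COUNT(*) FROM {table} WHERE {where}")
    return cur.fetchone()[0]

def check_migration(conn, staging_table, target_table, key_column):
    """Compare row counts and report keys staged but missing in the target."""
    staged = count_rows(conn, staging_table)
    loaded = count_rows(conn, target_table)
    missing = conn.execute(
        f"SELECT {key_column} FROM {staging_table} "
        f"EXCEPT SELECT {key_column} FROM {target_table}"
    ).fetchall()
    return {"staged": staged, "loaded": loaded,
            "missing_keys": [row[0] for row in missing]}

if __name__ == "__main__":
    # Self-contained demo against an in-memory database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE STG_ACCOUNT (ACCOUNT_NUM TEXT)")
    conn.execute("CREATE TABLE TGT_ACCOUNT (ACCOUNT_NUM TEXT)")
    conn.executemany("INSERT INTO STG_ACCOUNT VALUES (?)", [("A1",), ("A2",), ("A3",)])
    conn.executemany("INSERT INTO TGT_ACCOUNT VALUES (?)", [("A1",), ("A2",)])
    print(check_migration(conn, "STG_ACCOUNT", "TGT_ACCOUNT", "ACCOUNT_NUM"))
```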

Interface Testing

This is typically not a good candidate for automation. Testing of the interface of a Siebel application with a third-party application can be fully automated only when the third-party application itself fully supports test automation. Otherwise, inbound/outbound data passing through the Siebel system interface needs to be generated and verified using driver and stub routines, resulting in extensive effort in certain situations.

Analytics Testing

This comprises such things as report layout testing (e.g., checking the format and overall appearance of the report) and report data testing (e.g., validating the data in the back end and whether it is represented correctly in the report). Analytics testing does not qualify well for automation because activities such as format verification are difficult and cumbersome to implement through automation scripts. Moreover, data validation would involve setting up and executing SQL queries that often require run-time changes and optimization, which renders the scripts largely unsuitable for reuse, thereby defeating the purpose of automation.

Siebel Test Automation Approach

Figure 1 below highlights a suggested approach for automating testing of Siebel applications. This approach is based on the experience of the authors of this paper while working on multiple Siebel automation projects across domains such as pharmaceuticals and insurance.

Identification of Test Case Candidates for Automation

When considering test automation of a Siebel application, keep in mind that it is never possible to achieve 100% automation coverage. Unless care is taken in choosing test cases that are suitable candidates for automation, the effort may result in a loss of productivity. Dedicating time to automating test cases that will be executed only a handful of times cannot justify the development effort required.

Figure 1: Siebel Test Automation Approach (decision flow: determine whether each test case is a good candidate for automation; if so, check whether an automation framework exists or can be developed on budget, developing a custom framework where needed, then add test scenarios and execute; if not, execute manually; in either case, report results)

For example, in a multi-country Siebel application rollout, automation of test cases involving country-specific localization requirements, which may be executed only once or twice in the entire testing phase, should be avoided. It is therefore imperative to develop selection criteria when considering test case candidates for automation.

Developing the Automation Framework

At an abstract level, an automation framework comprises a set of standards, guidelines and utilities that are utilized for any test automation project. If a framework needs to be developed, a decision must be made on what type of framework should be used. For example, developing a keyword-driven framework takes considerably more time and technical expertise than developing a data-driven framework. On the other hand, creating new test cases using a keyword-driven framework is much faster and easier. It is therefore critical to understand the tradeoff and decide accordingly. Regardless of the type used, the framework should make it easy to perform the most common tasks.
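To make that tradeoff concrete, here is a minimal sketch of the keyword-driven idea in Python. The function and keyword names are purely illustrative and do not reflect any particular tool's API: the framework invests up front in a library of reusable actions and a small dispatcher, after which each new test case is just a table of keywords and data.

```python
# Minimal sketch of a keyword-driven framework: test cases are rows of
# (keyword, arguments) that a dispatcher maps onto reusable functions.
# All names are illustrative stubs.

def logon(user, password):
    print(f"logging on as {user}")

def navigate(screen, view):
    print(f"navigating to {screen} > {view}")

def create_activity(activity_type):
    print(f"creating activity of type {activity_type}")

KEYWORDS = {
    "Logon": logon,
    "Navigate": navigate,
    "CreateActivity": create_activity,
}

def run_test_case(steps):
    """Execute a test case expressed as keyword rows rather than code."""
    for keyword, args in steps:
        KEYWORDS[keyword](*args)

# A new test case is just data, so it can be assembled quickly once the
# framework exists; that is the payoff for the higher up-front effort.
run_test_case([
    ("Logon", ("sadmin", "secret")),
    ("Navigate", ("Activities", "My Activities")),
    ("CreateActivity", ("Call",)),
])
```

In a data-driven framework, by contrast, the test flow stays in code and only the input data varies, which is why it is cheaper to build but slower to extend with new flows.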

Apart from the built-in functionalities, the framework should also clearly define the coding standards and the script review and documentation process to be followed across projects. For any serious automation project, the lifetime of the automation suite is much longer than the duration of any specific automation engineer's involvement with the project. Adherence to proper coding standards and adequate documentation of those standards reduces the learning curve for any new automation engineer joining the team.

Figure 2 highlights the automation framework that was used as part of a major project involving testing of a Siebel SFA application, which was implemented across twenty-three countries in Europe for a Fortune 500 pharmaceuticals major.

Figure 2: Automation Framework Developed for a Large Pharmaceuticals Client (a storage layer of GUI maps, test data, function libraries, recovery scenarios and configuration tools; an execution layer of reusable actions, driver scripts, setup utilities and result reporting in HTML and XLS formats; and a test execution environment comprising the application under test, virtual servers and the test automation tool)

Estimation of Effort

Effort estimation for an automation project is a vast topic in itself and is beyond the scope of this paper. However, there are three critical components that need to be carefully considered for any automation exercise: the effort required for developing the automation framework; the effort required for developing and performing dry runs of the automation scripts; and the effort required for script maintenance.

Configuring the Siebel Environment for Test Automation

Automation support is not enabled in a Siebel environment by default. Before starting an automation exercise, it needs to be confirmed whether a license key is available for enabling automation. This license key, which is a 24- to 30-digit number, needs to be procured from Oracle and added to the Siebel server.

Adding Test Scenarios / Re-usable Actions / Function Libraries

Once the framework is ready and all configurations are in place, specific test scenarios can be added. These test scenarios leverage the available support from the framework to quickly automate specific test steps. Generic scenarios for a specific flavor of Siebel fall under this category (e.g., a medical representative creating an unplanned call with a doctor in the Siebel ePharma application, or an insurance agent creating a new quote for an individual client in the Siebel eFinance application). Multiple scenarios are then combined to create application-specific test cases.
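The sketch below illustrates this layering with stand-in stubs; none of the function, screen or field names come from the paper or from a real tool. Framework primitives sit at the bottom, a generic ePharma-style scenario in the middle, and an application-specific test case assembled from scenarios on top.

```python
# Minimal sketch of scenarios layered on framework primitives (all names are
# illustrative stubs, not a real function library).

def navigate(screen, view):          # framework primitive (stub)
    print(f"navigate: {screen} > {view}")

def set_field(field, value):         # framework primitive (stub)
    print(f"set {field} = {value}")

def click(control):                  # framework primitive (stub)
    print(f"click: {control}")

def create_unplanned_call(contact, call_date):
    """Generic ePharma-style scenario: a rep records an unplanned call."""
    navigate("Contacts", "My Contacts")
    set_field("Last Name", contact)
    click("New Call")
    set_field("Date", call_date)
    click("Save")

def test_call_and_follow_up():
    """Application-specific test case: scenarios combined into a business flow."""
    create_unplanned_call("Dr. Meyer", "12/04/2011")
    navigate("Activities", "My Activities")
    set_field("Type", "Follow-up")
    click("Save")

test_call_and_follow_up()
```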



Automation Execution

The biggest challenge faced by automation teams while executing automated test cases in a Siebel environment is the hardware resource requirement of the Siebel application as well as of the automation tools. It is suggested to have dedicated computers or virtual servers, with a sufficient amount of memory, available for the execution of automated test cases. For example, in the case of a multi-country Siebel deployment, country-specific setups need to be maintained on different machines, as having multiple language packs installed on the same machine may have an unforeseen impact on application behavior (e.g., submitting data on a form applet may result in an unexpected error message, causing the test to fail). When multiple machines are not available due to budget constraints, care should be taken to install the different language packs in different folders.

Reporting

Reporting of the results of automated test runs needs to be carefully planned. The same scripts may run multiple times in different cycles or for multiple countries. Hence, results need to be mapped correctly to the appropriate test run. While this may sound trivial, lack of care in designing the reporting structure may cause loss of test results in the long run. A sample reporting format is shown in Figure 3.

Figure 3: Reporting Structure for a Large Pharmaceuticals Client (an execution log organized by country and test cycle; for example, UAT results for Austria, with drivers such as UAT_Accounts_Driver and UAT_Activities_Driver and test cases Acct_Act_01 through Acct_Act_05)
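One simple way to keep runs distinguishable is to make the run context part of every result record. The sketch below uses illustrative field names (the sample values echo Figure 3) and appends one CSV row per executed test case, keyed by country, cycle, driver and test case.

```python
# Minimal sketch of result records keyed by country, cycle and test case so
# that repeated runs of the same script never overwrite each other
# (field names and values are illustrative).
import csv
from datetime import datetime

def report_result(path, country, cycle, driver, test_case, status):
    """Append one execution record; the key columns keep runs distinguishable."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([datetime.now().isoformat(timespec="seconds"),
                         country, cycle, driver, test_case, status])

report_result("results.csv", "Austria", "UAT", "UAT_Accounts_Driver",
              "Acct_Act_01", "Passed")
```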

Functional Testing: Criteria for Identification of Test Cases for Automation

Having explored the suitability of automation testing in various Siebel areas, it is evident that functional testing of the application is one area where considerable automation may be achieved. However, careful consideration has to be given to selecting candidate test cases for automation. The following guidelines will assist in the selection of suitable candidate test cases for automation:

Number of Test Cycles and Potential Re-usability of the Test Case

This is the single most important factor in choosing a test case for automation. If a test case needs to be executed only once or twice, the development effort required to automate it may exceed the effort required to execute it manually. A useful rule of thumb is that a test case needs to be executed at least three times during its lifetime to be considered a suitable candidate for automation. From a Siebel perspective, it therefore makes sense to automate test areas such as screen/view/LOV validation and logon/password sync, as well as test cases where basic opportunities or activities are created by filling in certain mandatory information.
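The rule of thumb can be framed as a simple break-even check. The sketch below is only a back-of-the-envelope illustration; the effort figures are made up and would come from the project's own estimates.

```python
# Break-even check for automating a test case: automation pays off only if
# the expected executions cover the scripting and maintenance cost
# (all effort figures are illustrative, in person-hours).

def worth_automating(manual_hours, automation_hours, maintenance_hours, expected_runs):
    """True if automating is cheaper than running the case manually every time."""
    return expected_runs * manual_hours > automation_hours + maintenance_hours

# A 1-hour manual case expected to run 8 times vs. 4h scripting + 2h upkeep:
print(worth_automating(manual_hours=1, automation_hours=4,
                       maintenance_hours=2, expected_runs=8))   # True
print(worth_automating(1, 4, 2, expected_runs=2))               # False
```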

Criticality of the Test Case

The test case should exercise a critical functionality in terms of either technical complexity or business importance. High-priority test cases should be chosen for automation first, while lower-priority ones should be automated if time permits.

Complexity of the Script Development Effort

Even if a test case exercises a very critical module or functionality of the application, it may turn out that automating it requires more effort than the project can justify. For example, in the Siebel ePharma application, creating a meeting by dragging and dropping onto the Siebel Calendar object is a common piece of functionality that users exercise very frequently. But incorporating support for automating this functionality may require considerable effort and may not be worth the time.

Test Data Type

Input Test Data Type: The type and format of test data required to automate a test case is an important factor to consider. To have a form filled automatically, it may be necessary to generate random character strings for the text entry fields, and dates in a predefined format (DD/MM/YYYY or MM/DD/YY) may need to be entered in the required fields.
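A minimal, standard-library-only sketch of such input data generation follows; the formats shown are simply the two mentioned above, and the string length is arbitrary.

```python
# Minimal sketch of input data generation for automated form filling:
# random strings for free-text fields and dates in a required format.
import random
import string
from datetime import date

def random_text(length=10):
    """Random alphanumeric string for a text entry field."""
    return "".join(random.choices(string.ascii_letters + string.digits, k=length))

def formatted_date(d=None, fmt="%d/%m/%Y"):
    """Date string in a predefined format, e.g. DD/MM/YYYY or MM/DD/YY."""
    return (d or date.today()).strftime(fmt)

print(random_text(8))                    # e.g. 'k3VqZ81x'
print(formatted_date())                  # e.g. '15/04/2011' (DD/MM/YYYY)
print(formatted_date(fmt="%m/%d/%y"))    # e.g. '04/15/11'   (MM/DD/YY)
```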

Output Data Type

The test case may need to extract or compute some specific data from the application which is required for the execution of a subsequent test case.


Three different methods are commonly used for storing such data. Each method has its advantages and disadvantages.

1. Storing data in an external file – This is the most commonly used method. Output data is computed and stored in text (.txt) or spreadsheet (.xls) format in a shared location (a minimal sketch of this method appears after this list). When using this method, care should be taken that the files are not stored on the local drive of a machine. If the output data is stored on a local drive and the subsequent test cases are scheduled to be executed on a different machine, those machines will not have access to the local drive of the first. Also, for security reasons, write access may not be granted on the test machines where the automation is executed, and the tests will then fail.

2. Passing data as an output parameter – A test may pass data as an output parameter to another test, and the second test receives that data as an input parameter. This method is the fastest, as the data is readily available in memory and is not fetched from secondary storage. But to use this method successfully, all tests that require an interchange of data need to be executed in the same session without a break. Once a session ends, the data is lost. This also prohibits partial execution of subsequent test cases, which may be required when retesting.

3. Passing data as a runtime data table – This method is technically similar to method one described above, but the special data format is provided only by some leading automation tool vendors, so this feature may not be present in all automation tools. Runtime data tables are used when the same test case is executed against multiple sets of data and the output needs to be tracked for each set.
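As referenced under method 1, here is a minimal sketch of handing data from one test to the next via an external file. A small JSON text file is used purely for simplicity; the file name and the quote-number value are made up, and in practice the path would point at a shared network location rather than a local drive, for the reasons given above.

```python
# Minimal sketch of method 1: one test persists a computed value to a file that
# a later test (possibly on another machine) reads back. File name and values
# are illustrative; use a shared network path in practice, not a local drive.
import json

OUTPUT_FILE = "run_outputs.json"  # in practice: a shared/UNC path

def save_output(key, value, path=OUTPUT_FILE):
    """Store an output value so that subsequent test cases can pick it up."""
    try:
        with open(path) as f:
            data = json.load(f)
    except FileNotFoundError:
        data = {}
    data[key] = value
    with open(path, "w") as f:
        json.dump(data, f)

def load_output(key, path=OUTPUT_FILE):
    """Retrieve a value stored by an earlier test case."""
    with open(path) as f:
        return json.load(f)[key]

# Test A stores the quote number it created; test B retrieves it later.
save_output("quote_number", "Q-000123")
print(load_output("quote_number"))
```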

Automation Testing in a Multi-Country / Multi-Lingual Deployment Scenario

Large Siebel deployment projects often involve multi-country roll-outs wherein the application has to be rolled out in multiple languages. Such rollouts are also characterized by a common set of functionality applicable across all countries (core requirements) in addition to certain unique country-specific requirements (localization requirements). In such a multi-country deployment scenario, the following salient points need to be considered:

Prioritization of Test Cases for Execution

It is advisable to initially assess the criticality of the requirements for each country. Following this exercise, the test cases to be automated should be prioritized for each country.

Changes in Automation Approach

Normally, automation tools identify the test objects used in the application by means of the text displayed on the GUI of the object. For example, a button with the caption “Submit” will be recognized by the text “Submit” displayed on it. When the application is deployed in multiple languages, this becomes a serious problem for the reusability of the scripts across different languages. This difficulty is easily overcome in Siebel applications, as all the objects used in a Siebel application have a unique repository ID which can be utilized to uniquely identify the object. This ID remains the same even when the GUI text of the object changes for different languages. The framework designer must therefore enforce the use of the repository ID as the object identifier in all automation scripts that are intended to be used for multiple languages.
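The sketch below contrasts the two identification strategies in plain Python; the locator strings are illustrative placeholders, not the object-map syntax of any particular automation tool.

```python
# Minimal sketch contrasting caption-based and repository-based identification
# (locator strings are illustrative placeholders only).

# Fragile: breaks as soon as the application runs in another language,
# because the German UI shows "Einreichen" instead of "Submit".
CAPTION_LOCATORS = {"submit": "Submit"}

# Robust: the repository name stays the same in every language pack.
REPOSITORY_LOCATORS = {"submit": "Customers View/Customers Applet/Submit Button"}

def click_by_repository(logical_name):
    """Resolve a control through its language-independent repository path."""
    print(f"clicking {REPOSITORY_LOCATORS[logical_name]}")

# The identical call works for the English, German and French deployments.
click_by_repository("submit")
```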

The reporting structure of the automation framework should also be amended to store the country-specific results separately. This adds one layer of complexity to the reporting structure of the automation framework, but offers long-term benefits by providing a clear picture of the test execution results for individual countries.

Language Validation Approach

Availability of comprehensive reference data is a pre-requisite for accurate translation validation. The reference data should be arranged in tabular form, as indicated in Figure 4.

From the point of view of any modern test automation tool, any text displayed in a Siebel application is a run-time property of an object of the application. For example, consider the scenario of validating the translation of the caption of a button. Let's assume that in the application under test, a “Submit” button is displayed in the “Customers” applet of the “Customers” view. The caption of the button will read “Submit” in the English, “Einreichen” in the German and “Soumettre” in the French version of the same application. To test this, a table is first created as shown in Figure 4. The table depicts the complete details of the navigation path to the button (i.e., which screen, which view and which applet the button belongs to). Along with the navigation information, the correct translations are also stored in the table. The automation tool first navigates to the relevant area of the application and then validates the displayed text using this reference table. Using this approach, validation of translations can be done in a very generic way, so that the same script can be reused for multiple applications.

Figure 4: Table for Translation Testing

Screen Repository Name | View Repository Name | Applet Repository Name | Field Repository Name | English | German | French
Customer Detail Screen | Customers View | Customers | Submit Button | Submit | Einreichen | Soumettre
Account Screen | Account View | Accounts | New Visit | New Visit | Besuchen Sie New | Nouvelle visite
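A minimal sketch of that lookup-and-compare step follows. The reference dictionary mirrors the two rows of Figure 4, and get_displayed_caption is a hypothetical stand-in for whatever call the automation tool provides to read the caption at run time.

```python
# Minimal sketch of translation validation against a reference table: expected
# captions are keyed by navigation path and language, then compared with the
# text actually read from the screen. The reader function is a stand-in.

REFERENCE = {
    ("Customer Detail Screen", "Customers View", "Customers", "Submit Button"): {
        "English": "Submit", "German": "Einreichen", "French": "Soumettre",
    },
    ("Account Screen", "Account View", "Accounts", "New Visit"): {
        "English": "New Visit", "German": "Besuchen Sie New", "French": "Nouvelle visite",
    },
}

def validate_translation(path, language, get_displayed_caption):
    """Compare the displayed caption at a navigation path with the expected text."""
    expected = REFERENCE[path][language]
    actual = get_displayed_caption(path)
    return ("PASS" if actual == expected else "FAIL", expected, actual)

# Demo with a fake reader standing in for the automation tool.
fake_reader = lambda path: "Einreichen"
print(validate_translation(
    ("Customer Detail Screen", "Customers View", "Customers", "Submit Button"),
    "German", fake_reader))
```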


Stress on Re-usability

As long as the repository ID is used as the identifying property of the Siebel objects in the automation scripts, the same scripts can be used for all countries. Care should be taken in situations where a particular Siebel object is present for one country but absent for another due to localization requirements. This type of scenario must be handled individually for each country.



Conclusion

An automation test strategy should be developed based on the objective of your automation exercise (e.g., reduce testing cost, increase test coverage, etc.). If an automation testing exercise is to be attempted for a Siebel test automation project, it is advisable to start with suitable test candidates from a functional and data testing perspective and then move to the analytics or interface testing areas. Adequate care should also be taken when selecting test candidates for automation within these areas, and decisions should be driven by repeatability, relative ease of automation and criticality of the test case.

About the Authors

Parantap Samajdar is a CRM Testing Consultant with more than six years of experience in test automation and performance testing. He holds a Bachelor of Engineering degree in Computer Science. Parantap can be reached at [email protected].

Indranil Mitra is a Principal Consultant and has extensive experience in test analysis and management gained from multiple CRM testing projects. He has more than 10 years of experience and holds a Bachelor of Engineering degree in Instrumentation and Electronics. Indranil can be reached at [email protected].

Ankur Banerjee works within Cognizant's CSP Testing Pre-Sales and Delivery team. He has more than 10 years of experience across various domains and technologies. Most recently he handled test management of a large Siebel upgrade project for a major insurance client. Ankur holds an MBA (Systems) and a Bachelor of Engineering degree in Civil Engineering. Ankur can be reached at [email protected].

About Cognizant

Cognizant (Nasdaq: CTSH) is a leading provider of information technology, consulting, and business process outsourcing services, dedicated to helping the world's leading companies build stronger businesses. Headquartered in Teaneck, New Jersey (U.S.), Cognizant combines a passion for client satisfaction, technology innovation, deep industry and business process expertise, and a global, collaborative workforce that embodies the future of work. With over 50 delivery centers worldwide and approximately 104,000 employees as of December 31, 2010, Cognizant is a member of the NASDAQ-100, the S&P 500, the Forbes Global 2000, and the Fortune 1000 and is ranked among the top performing and fastest growing companies in the world.

Visit us online at www.cognizant.com for more information.

World Headquarters
500 Frank W. Burr Blvd.
Teaneck, NJ 07666 USA
Phone: +1 201 801 0233
Fax: +1 201 801 0243
Toll Free: +1 888 937 3277
Email: [email protected]

European Headquarters
Haymarket House
28-29 Haymarket
London SW1Y 4SP UK
Phone: +44 (0) 20 7321 4888
Fax: +44 (0) 20 7321 4890
Email: [email protected]

India Operations Headquarters
#5/535, Old Mahabalipuram Road
Okkiyam Pettai, Thoraipakkam
Chennai, 600 096 India
Phone: +91 (0) 44 4209 6000
Fax: +91 (0) 44 4209 6060
Email: [email protected]

© Copyright 2011, Cognizant. All rights reserved. No part of this document may be reproduced, stored in a retrieval system, transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the express written permission from Cognizant. The information contained herein is subject to change without notice. All other trademarks mentioned herein are the property of their respective owners.