

Automating Functional Testing using Business Process Flows

    By Deepali Kholkar, Nikhil Goenka and Praveen Gupta

Process models promise easy and effective functional testing

Functional testing of large, complex business applications poses multiple challenges. While the test suite needs to be comprehensive, test design and execution cycles are constrained by time, effort and resource costs. An effective functional test process is one that achieves a trade-off between these two goals without compromising on test quality. However, there is no systematic method available that helps test design teams come up with a comprehensive functional test suite, or that supports the selection of tests to achieve an efficient yet effective functional test.

Systems are built to support an organization's business processes. Functional testing must be done at several stages during the development and maintenance life cycle, viz., system testing, integration testing and user acceptance testing, to evaluate the system's compliance with its functional requirements. It should test the core functionality of the system and cover the various usage scenarios comprehensively.

This requires testing system behavior under specific conditions that may occur in its life cycle. However, one may not want to run the entire test suite repeatedly, but instead apply selection using a combination of conditions.

Since there is no fixed method for building the content of a functional test suite, individual project teams define their own test methodology and test cases. The quality and coverage of these test suites are completely dependent upon the availability of expertise and restricted by time.

The onus of building a test suite that achieves functional coverage, including special cases, and of optimizing the tests is completely on the test design team, who currently have no tools to support this. Commercially available tools offer test management and an automation platform but do not help generate test content. Tests are conventionally captured in document form, which becomes difficult to go through and maintain for large or complex


systems. It is hard to get a handle on coverage, or to track which functionality has been covered under which set of conditions, from documents. The tests, their variations and their selection need to be managed manually. There is no automated mechanism to verify test documents against the requirements.

This paper proposes a method for test design of business applications based on modeling application functionality as business process flows annotated with business rules and test conditions, and on using the model for test generation and test selection. This gives a formal method for creating the test content from the functional requirements. The structured model form and visual notation make the model easy to review and complete, enabling creation of a test suite that achieves good functional coverage.

Exhaustive test coverage may not be desirable during every test cycle due to time and effort constraints; hence selection is provided on the model for test optimization. The model is verified using model checking. Sample test data and scripts to automate test execution are generated.

Since this is a top-down approach to test design, beginning with a high-level process model rather than with use cases alone at the leaf level, it can help developers be cognizant of the various usage scenarios upfront, avoiding late detection of gaps and defects when end-to-end scenario testing is done for the first time during the system test.

The approach offers some independence from team configuration changes over time, since much of the necessary domain knowledge and test specification is captured in model form. It also offers the possibility of reuse across applications.

RELATED WORK

Test generation from dynamic models of various kinds has been a topic of research for quite some time. Existing work deals with generation of test cases from state transition diagrams, state charts, collaboration diagrams, sequence diagrams and activity diagrams. Most of it is intended for unit or integration testing.

The approach described in this paper is different in using business process models that depict functionality without going into implementation detail, and in providing a seamless, end-to-end approach from test design to automation, usable by business analysts and test teams. The process or workflow notation maps more easily to the functionality of most business systems than state transition diagrams or state charts. The collaboration and sequence diagrams in many of the approaches are specified at a design or implementation level, whereas test teams do not usually have access to the system design or implementation.

Constraint solving is also not a new concept and has been used for generation of test data earlier. Our approach differs in applying constraint solving to verify the pre- and post-condition sequences in each path of the process model, to identify redundant paths and errors, and also to generate test data for each step of the scenario.

Abdurazik and Offutt describe a technique for generating tests from collaboration diagrams and utilizing them for checking actual system execution by instrumentation of the code [1]. The approach is targeted towards unit or integration testing.

Hartmann et al. describe test scenario generation and automation from activity diagrams [2]. Pre- and post-conditions are not processed for either data generation or verification purposes. Transition coverage is targeted, as opposed to branch coverage in our case. Guard conditions are defined for additional selection and can be manually mapped to data values. Business rule specification is not supported.

Briand and Labiche describe an activity diagram based approach to test generation where each activity is further detailed as a sequence diagram with Object Constraint Language (OCL) constraints [3]. Their approach addresses coverage of all possible test scenarios by interweaving of activities; this, however, leads to a very large number of combinations. Generation is not automated. The approach does not attempt to solve constraints for verification or data generation. Test automation is not dealt with in this work.

Fröhlich and Link describe transformation of use case documents into state charts and generation of test cases from these [4]. A state chart is at the level of a use case; a higher-level process context is not available. Task pre-conditions and actions are encoded and posed as an artificial intelligence problem to find test sequences. Transition coverage is ensured, which is equivalent to task coverage in process model parlance. Formal specification of conditions, verification, data generation and test automation are not covered. Cavarra et al. describe an approach to test generation based on state transition diagrams [5]. Test automation and data generation are not addressed there.

Bertolino and Gnesi discuss a method to generate test cases for product lines using the category-partition method by annotation of textual use-case specifications; hence test generation is not automated [6].

MODELING SYSTEM FUNCTIONALITY

This approach proposes capturing system functionality as a set of business process flows, annotated with business rules and test conditions. If a process model has already been created in the requirements phase, it can be reused. The functional test suite must, by definition, come from the functional requirements of the system.

Business process flows have been chosen since it is intuitive to think of functional testing as testing the business process flows of the system.

Business Process Modeling Notation (BPMN) is used as the notation for capturing the process flows [7]. It is gaining popularity as a notation for expressing requirements and is understood by all stakeholders, be they requirement analysts, software developers, managers or end-users [8]. This makes it possible for test designers who know only the domain, or the system functionality as a black box, to specify the model. Knowledge of the internal details of the system implementation is not required.

The first step is to draw the high-level business process flows of the system. Each step in a process flow can be an atomic task or a sub-process that is further exploded as a separate process flow. Important branch flows in the processes that need to be covered in the tests should be drawn, along with the branching conditions, which are usually the validation of some business rule. Conditions are annotated on the process flows using condition names as labels. Condition names should be readable and in domain terminology.

If data generation is desired, the input and output parameters of tasks need to be modeled; their data type can be a simple data type or any entity from the data model. The data model is required only for modeling parameters and business rules and, later, for verification of the process model.


DATA MODEL

The data model for the application is captured using the UML class diagram notation. It captures the various entities, their attributes and their associations. Only entities from the domain model need to be included, not the classes that are created during system design and implementation.

The class model for part of an Insurance system is shown in Figure 1. Business rules for the domain or application are captured as rules on the entities. These are invariants, or conditions that must hold in every state of the system. They are expressed as OCL constraints on the object model [9].

A business analyst or test designer who is drawing the entity model need only specify the rules as labels and descriptions in domain terminology, e.g., PlanHolderMandatoryRule for stating that a Plan Holder is mandatory for every Plan, and attach them to an entity. The development team can later fill in the OCL constraint for each invariant, since that requires familiarity with OCL syntax.
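To make this concrete, the OCL constraint eventually filled in for PlanHolderMandatoryRule might read along the lines of: context Plan inv PlanHolderMandatoryRule: self.planHolder->notEmpty(). The association name planHolder is an assumption here, since the exact names used in Figure 1 are not given.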

BUSINESS PROCESS MODELING NOTATION

BPMN has been accepted as an OMG standard. The complete BPMN notation need not be used by our method. A simple BPMN process flow diagram depicting the Process Payments batch process of the Insurance system is shown in Figure 2. The core BPMN is very similar to the flowchart or UML Activity diagram notation. Every process begins with a Start node. The rounded rectangles are the tasks or activities being performed during the process.

Each task can be an atomic task or a sub-process. While there is no rule regarding the granularity of atomic tasks, a rule of thumb is for each task to represent a user interaction with the system, or a use case. For example, in the Process Payments process, the individual use cases ProcessPayment and CheckForRejectPayments are tasks, while ProcessRejectPayments is a sub-process. The diamond-shaped nodes are the gateways or decision points in the process, where the flow branches or merges.

[Figure 1: Entity Model for an Insurance System, showing the Plan, Plan Payment Method, Payment Reference, Plan Transaction, Premium History, Reject Payments and related entities with their associations. Source: Internal Research]


The arrow connectors are called sequence flows and depict the flow of control in the process. Each sequence flow can have one or more conditions associated with it that enable the flow. We have used these conditions to express the test conditions in the process flows. They are annotated on the sequence flows as pre- and post-conditions, before and after the tasks respectively. These need not be just the pre- and post-condition contracts for the task; they are the conditions that qualify the scenario or branch. Each branch ends in an End node.

The diagram shows the various scenarios that can occur during Process Payments as branch flows, annotated with the branching conditions as labels. Most branching conditions are validations of some business rule, as indicated here. Initially, the condition Rule0002ProcessPayments indicates application of Rule0002, which gives the criteria for fetching the payments to be processed. After task ProcessPayment, the left branch is the scenario in which no payments to be processed were found, indicated by the condition label NotFound_Rule0002Payments. The other branch shows the condition Rule0003Payments, which looks for rejected payments. If any are found, it calls the sub-process ProcessRejectPayments and ends by updating the associated Payment Reference details.

TEST GENERATION

Models, being formal, lend themselves to automated analysis and verification. The process model is traversed for generation of test scenarios.

[Figure 2: BPMN flow diagram for the Process Payments process, with tasks ProcessPayment, CheckForRejectPayments, UpdatePaymentReference and the sub-process ProcessRejectPayments, and condition labels such as Rule0002ProcessPayments, NotFound_Rule0002Payments, Rule0003Payments and NotFound_PaymentReferenceToUpdate. Source: Internal Research]


Each scenario specification is verified. The pre- and post-conditions are used for generation of test data. Test selection is provided on the model in order to generate a pruned test suite.

Scenario Generation

Branch coverage is used as the criterion for generation; hence all paths in a process flow are covered using a depth-first search. Each path or branch becomes one test scenario for the system. Each scenario is a sequence of task tuples, each task tuple consisting of a task and its specified pre- and post-conditions.

    Test suite T = {S1, S2, ..., Sn}, where S1, S2, ..., Sn are the scenarios
    Si = {t1, t2, ..., tk}, where t1, t2, ..., tk are the task tuples in scenario Si
    tj = (prej, Tj, postj), where prej, Tj and postj are the pre-condition, task body and post-condition in task tuple tj

The process flow may involve loops. Every loop is unrolled once by the scenario generator algorithm. All scenarios thus generated, along with their test data, are output as a test plan document.
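As a rough illustration of this generation step, the sketch below enumerates all start-to-end paths of a process flow by depth-first search, allowing each edge to appear at most twice on a path (one reading of unrolling every loop once). The flow encoding and condition labels are illustrative, loosely following Figure 2; they are not the tool's internal representation.

    # Process flow: task -> list of (condition label, next task); "End" terminates.
    FLOW = {
        "Start": [("Rule0002ProcessPayments", "ProcessPayment")],
        "ProcessPayment": [
            ("NotFound_Rule0002Payments", "End"),
            ("Rule0003Payments", "CheckForRejectPayments"),
        ],
        "CheckForRejectPayments": [
            ("NotFound_Rule0003RejectPayments", "End"),
            ("RejectPayments", "ProcessRejectPayments"),
        ],
        "ProcessRejectPayments": [
            ("Rule0080PaymentReferenceToUpdate", "UpdatePaymentReference"),
            ("NotFound_PaymentReferenceToUpdate", "End"),
        ],
        "UpdatePaymentReference": [("PaymentReferenceUpdated", "End")],
    }

    def scenarios(node="Start", path=(), seen=()):
        """Yield every Start-to-End path as a tuple of (source, condition, target)."""
        if node == "End":
            yield path
            return
        for cond, nxt in FLOW[node]:
            edge = (node, cond, nxt)
            if seen.count(edge) >= 2:   # loop already unrolled once; stop here
                continue
            yield from scenarios(nxt, path + (edge,), seen + (edge,))

    # Each generated path is one test scenario of the suite T.
    for i, s in enumerate(scenarios(), 1):
        print(f"S{i}:", " -> ".join(f"[{cond}] {dst}" for _, cond, dst in s))

Each path printed here corresponds to one scenario Si in the notation above; the pre- and post-conditions attached to each task tuple come from the condition labels on the traversed edges.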

Verification

Model checking has been used to verify the process specification against the business rules, to detect inconsistencies in the process flow itself and to eliminate infeasible paths, which yields a great deal of optimization in the number of scenarios generated. The open-source model checker Symbolic Analysis Laboratory (SAL) [10] has been used for this purpose. For verification, and for generation of test data satisfying the conditions mentioned in the process flow, the conditions need to be defined using OCL expressions. Each scenario path is converted into a specification for SAL, in which the pre- and post-conditions are translated into logic expressions. The applicable business rules and cardinality invariants from the class diagram are also translated into constraints.

Changes effected by each task can be provided very simply as conditions, using OCL, and as actions, for which there is a simple syntax. Actions can be the creation or deletion of objects and associations, or the setting of attribute values. This needs to be done only where required, i.e., if the created or deleted objects are referred to later in the flow.

    Post-state of task = Pre-state of task + changes effected by task

The model checker verifies the specification for consistency. The pre-condition, task specification and post-condition need to be consistent with one another at each task level, and also across the entire sequence of tasks that makes up the scenario, for the model checker to be able to find a path. Business rules are verified amongst themselves for consistency. Pre- and post-conditions in a path are also verified against the business rules to ensure that there are no conflicts between them.

When SAL fails to find a solution for a specification, there is a conflict. The cause could be an error in the specification, or the conditions within the scenario may genuinely conflict, i.e., the path is invalid and can be eliminated. In the selected case study, 215 scenarios were generated overall, of which only 130 were valid; the other 85 were eliminated as infeasible. An error in the specification can be either a business rule being violated by a condition or action, or the conditions in the scenario conflicting among themselves. Errors should be rectified and solutions generated for all valid scenarios.
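The paper performs this check with SAL; purely to illustrate the underlying idea, the sketch below uses the Z3 solver instead, which is an assumption of this example rather than part of the authors' toolchain. A path is feasible exactly when its conditions and the business rules are jointly satisfiable; unsatisfiable paths are the infeasible ones that get eliminated. The variables and conditions are invented stand-ins, not the case study's actual rules.

    from z3 import Bool, Not, Solver, sat

    def feasible(path_conditions, business_rules):
        """Return True iff the path's conditions and the business rules
        can all hold together; unsat paths are pruned as infeasible."""
        s = Solver()
        for r in business_rules:
            s.add(r)
        for c in path_conditions:
            s.add(c)
        return s.check() == sat

    payments_found = Bool("payments_found")
    rejects_found = Bool("rejects_found")

    rules = []  # business-rule invariants would be translated into here

    # A path that both asserts and denies the same branch condition is
    # exactly the kind of conflict the verification step eliminates:
    bad_path = [payments_found, Not(payments_found)]
    good_path = [payments_found, rejects_found]

    print(feasible(bad_path, rules))   # False -> infeasible, eliminate
    print(feasible(good_path, rules))  # True  -> keep as a valid scenario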


Test Data Generation

When the constraint solver finds a solution to the scenario specification, it finds a sequence of states that satisfies the pre- and post-conditions of each task in the sequence. Each state contains values for the various defined data objects and attributes. The generated data values in each state conform to the pre-conditions as well as to the business rules. A pre-state is used as the source of input test data for the task following it. The initial state, or scenario pre-state, gives sample pre-requisite data that must exist in the system database for the scenario to begin execution.

The input data generated for each task needs to satisfy not only the pre-conditions of that task but also those of subsequent tasks in the flow, so that the system is able to execute the entire scenario. It must also be cognizant of changes brought about by the preceding task(s), i.e., it must come from the post-state of the previous task. These criteria are naturally met by the constraint solving approach, whereas in the conventional approach they have to be ensured manually. This, coupled with the need to create the scenario pre-state data, makes manual test data creation time-consuming and complex.
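Continuing the illustrative Z3 stand-in from the verification step, the sketch below shows how a satisfying assignment doubles as sample test data: the solver's model for a state yields concrete values that respect both the task's pre-condition and the business rules. The variable names and constraints are hypothetical.

    from z3 import Int, Solver, sat

    premium_due = Int("premium_due")
    payment_amount = Int("payment_amount")

    s = Solver()
    s.add(premium_due > 0)                 # a business-rule invariant (assumed)
    s.add(payment_amount == premium_due)   # a task pre-condition (assumed)

    if s.check() == sat:
        m = s.model()
        # One concrete pre-state: sample data to seed the database with
        # before the scenario begins execution.
        print({d.name(): m[d] for d in m.decls()})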

Test Automation

The scenarios are automated by generating test scripts for them, with the generated test data embedded. The automation has been done using the Test Drive tool [11] developed in our lab, which currently generates scripts for the open-source web-based automation platform Selenium [12].

Test Drive has a recorder which records the sequence of UI actions required to perform each atomic task in the above process, just like any capture-replay tool. For each generated scenario, the recorded sequences for the individual tasks are threaded together, the generated input data is inserted and the test script is created. These test scripts can be replayed on any browser using just the Selenium runtime engine.
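The rough shape of such a generated script, written against Selenium's Python bindings, might look as follows. The URL, element IDs and data values are placeholders invented for illustration; the paper does not show Test Drive's actual generated code.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Data produced by the constraint solver for this scenario (placeholder values)
    generated_data = {"payment_ref": "PR-0001", "amount": "250"}

    driver = webdriver.Firefox()
    driver.get("http://insurance-app.example/payments")  # hypothetical URL

    # Recorded UI sequence for task ProcessPayment, with generated data inserted
    driver.find_element(By.ID, "paymentRef").send_keys(generated_data["payment_ref"])
    driver.find_element(By.ID, "amount").send_keys(generated_data["amount"])
    driver.find_element(By.ID, "process").click()

    # ...recorded sequences for the remaining tasks in the scenario are
    # threaded on in the same way...
    driver.quit()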

Test Selection

The generated test cases are exhaustive, and their number can become very large due to combinatorial explosion when the process hierarchy is large and complex. In such a case, running all tests is neither feasible nor desirable. The test suite can be pruned using preliminary heuristics; for instance, certain conditions need not be checked on all possible paths. An error condition, for example, may need to be checked only once. Selection on conditions is provided, where it can be stated that a condition is to be tested in only one, or a specified number, of paths. This results in a drastic reduction in the number of test cases. Support for selection of specified conditions and tasks, and for selection using compound conditions, is being added. Although a very simple concept, it allows the test team to specify the functionality they would like to focus on in a particular test cycle.
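A minimal sketch of this pruning heuristic follows: each capped condition is allowed to appear in at most a quota of kept paths, and a path is dropped once keeping it would exceed a quota. The greedy strategy and the encoding of scenarios as lists of condition labels are assumptions of this sketch, not the tool's documented algorithm.

    def select(scenarios, quota):
        """scenarios: list of paths, each a list of condition labels.
        quota: condition label -> max number of kept paths testing it."""
        used = {}
        kept = []
        for path in scenarios:
            capped = [c for c in path if c in quota]
            # Keep the path only if every capped condition on it still has quota left.
            if all(used.get(c, 0) < quota[c] for c in capped):
                kept.append(path)
                for c in capped:
                    used[c] = used.get(c, 0) + 1
        return kept

    paths = [
        ["Rule0002ProcessPayments", "NotFound_Rule0002Payments"],
        ["Rule0002ProcessPayments", "Rule0003Payments", "RejectPayments"],
        ["Rule0002ProcessPayments", "NotFound_Rule0002Payments", "RejectPayments"],
    ]
    # Test the error condition NotFound_Rule0002Payments in only one path:
    print(select(paths, {"NotFound_Rule0002Payments": 1}))  # keeps the first two paths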

TOOL IMPLEMENTATION

The described approach has been implemented using tools available within the lab and open-source tools. Figure 3 shows a schematic diagram of the tool and approach. The requirement model, comprising the process flows and class diagram, was created using Metamodelers, a modeling tool developed within our lab that supports modeling using BPMN and UML. The path generator was developed using OMGen, Metamodelers' scripting language. It traverses the process flow models to generate the scenario paths.


The Model-to-SAL translator translates each scenario path into a SAL script. The SAL model checker verifies and runs each script to generate data. The test case generator picks up the scenario paths, the generated data and the individual recorded use cases to generate automated test scripts. It also takes as input a simple mapping between the attributes in the domain model and the input fields on the screen.

INSURANCE APPLICATION: CASE STUDY

The test generation approach has been successfully tried out on processes from three real-life applications. The results are presented here. To illustrate the application of the tool, one of the case studies, done on an Insurance system, is described in detail.

Two online processes and one batch process were modeled. The batch process had a long and complex control flow, with a requirements document spanning over a hundred pages. The test team had been trying, unsuccessfully, to create test cases for it manually.

The modeling approach helped the team obtain tests for the batch process with relative ease. They were able to draw a process model for the complex description easily; modeling in fact reduced the complexity of the test case preparation task by helping them visualize the process flow and its branches. The team could quickly generate tests that they had been unable to construct manually, even though they were not domain experts but a relatively inexperienced team.

Another important observation was that, even without knowledge of this approach, test case writers draw a kind of flow graph to clarify their understanding of the process.

One drawback from their perspective was having to draw an extra diagram; the two advantages were the considerable time and effort saved. The process diagram also became an artifact that the team later used for reference and for discussions on coverage.

The Insurance Model

The class model in Figure 1 shows the Product entity, which denotes an insurance product; Plan, denoting a policy issued against the Product to customers; and many associated entities storing different data about the Plan. The customer, beneficiaries, payees etc. are each modeled as a Relation of the appropriate type.

[Figure 3: Tool schematic - the path generator traverses the requirement model (process flows + class model) to produce scenario path files; the Model-to-SAL translator turns each scenario into a SAL script; SAL produces the generated data; and the test case generator combines the scenario paths, generated data, recorded use cases from Test Drive and a mapping file to emit test scripts. Source: Internal Research]


The Process Payments batch process shown in Figure 2 looks for payments returned by the bank for various reasons. After processing the records, the batch updates the plan by reversing the premium payment and generates letters informing the client. The system tries to match the returned premium payment against a specific item in the plan's premium payment history in order to reverse it. The main processing happens in the ProcessRejectPayments sub-process. The complexity was due to the alterations of various attributes based on the 150-odd rules present for this particular batch process.

RESULTS

Data from the test generation exercise is tabulated in Table 1. The average time taken by the team to create tests manually was one week per process. In this respect, our approach compares favourably. The tool was found to scale up and handle the large number of rules and flows generated. It was also successful in pruning the test suite to a very compact size. 1200 scenarios were originally generated; after specifying ten conditions, mostly error and exception conditions, that were to be tested in only a single flow, the number came down to 215.

On verification, 85 infeasible paths containing conflicting conditions were identified. These were all found to be redundant paths where, for example, the flow had a branch based on a variable value of A, B or C, and later in the flow another branch based on a check of the same variable for a value of A, B or D. This created several redundant paths with conflicting values chosen for the same variable, which the constraint solver identified as unsatisfiable. The final number of scenarios was thus reduced to

130. The scenarios were verified by the team as being a satisfactory set of tests. The metrics above clearly bring out the considerable time reduction in test design, but not the total productivity gain in test automation. Automation was not carried out, since the implementation was not web-based and our tool currently supports automation only for web-based applications. The batch process had not been implemented, hence no data about defects or about execution time and effort are available. However, the generated tests provided 100% task and condition coverage and also covered most of the scenarios.

The test generation and test automation components have been tried separately on several real-life projects; the automation component is being used extensively in a few. The end-to-end approach has been tried on several case studies within the lab.

Table 1: Results from Insurance Case Study (Source: Internal Research)

                                  Process Validate    Capture    Process Payments
                                  New Application     Payment    Batch
    Time to understand
      functionality               3 days              1 day      5 days
    Number of Rules               100+                25         150
    Time taken to model           1 day               2 days     2 days
    Number of Tests generated     7                   42         Total: 1200
                                                                 Selected: 215
                                                                 Feasible: 130

CONCLUSION

The approach aims to provide functional test teams with a model based method and framework to design a functional test suite, with an assurance of test suite quality and coverage. The fact that this approach can be used by business analysts to generate and automate tests is its forte.



The model and visual notation capture not just a test specification but a great deal of detail about the system, and become a valuable artifact in themselves, besides being used to find errors in the specification and to drive selection for achieving optimal test cycles.

The approach has been tested on case studies drawn from real projects and was found to scale up. It was also interesting to find that one need not use the entire process, right up to test automation and data generation, to make an impact. In the case study mentioned, modeling and verification alone were found to provide enormous benefits in terms of ease of specification and test generation, when the team had been unable to create the tests manually.

REFERENCES

1. Abdurazik, A. and Offutt, J. (2000), Using UML Collaboration Diagrams for Static Checking and Test Generation, in Proceedings of the Third International Conference on the UML, pp. 385-395.
2. Hartmann, J., Vieira, M., Foster, H. and Ruder, A. (2005), A UML-based Approach to System Testing, Innovations in Systems and Software Engineering, Vol. 1, No. 1, pp. 12-24.
3. Briand, L.C. and Labiche, Y. (2001), A UML-based Approach to System Testing, Software and Systems Modeling, Vol. 1, No. 1, pp. 10-42.
4. Fröhlich, P. and Link, J. (2000), Automated Test Case Generation from Dynamic Models, in Bertino, E. (ed.), Proceedings of ECOOP, pp. 472-491.
5. Cavarra, A., Davies, J., Jeron, T., Mournier, L., Hartman, A. and Olvovsky, S. (2002), Using UML for Automatic Test Generation, in Proceedings of ISSTA.
6. Bertolino, A. and Gnesi, S. (2003), Use Case-based Testing of Product Lines, in Proceedings of ESEC/SIGSOFT FSE, pp. 355-358.
7. OMG BPMN 1.0 specification. Available at http://www.bpmn.org/Documents/OMG_Final_Adopted_BPMN_1-0_Spec_06-02-01.pdf.
8. Buchanan, I. (2005), Requirements Elicitation with Business Process Modeling Notation. Available at http://conferences.embarcadero.com/article/33351.
9. OMG Object Constraint Language specification V2.0. Available at http://www.omg.org/technology/documents/formal/ocl.htm.
10. SAL model checker. Available at http://sal.csl.sri.com.
11. Patel, S., Gupta, P. and Surve, P. (2010), Test Drive - A Cost Effective Way to Create and Maintain Test Scripts for Web Applications, in Proceedings of the 22nd International Conference on Software Engineering, pp. 474-476.
12. Selenium open-source web application testing platform. Available at http://seleniumhq.org/.



© Infosys Limited, 2011


Authors' Profiles

DEEPALI KHOLKAR is a Scientist at Tata Research Design and Development Centre (TRDDC). She has several years of experience in model driven development, and her current area of work is model based testing and formal specification and analysis. She can be reached at [email protected].

NIKHIL GOENKA is a Systems Engineer at TRDDC. His research interests include model based testing and formal specification and analysis. He can be contacted at [email protected].

PRAVEEN GUPTA is a Systems Engineer at TRDDC. His research interests include model based testing and formal specification and analysis. He can be contacted at [email protected].