
Evolving an Elective Software Testing Course: Lessons Learned

Edward L. Jones, Florida A&M University, Tallahassee, FL USA

3rd Workshop on Teaching Software Testing

Agenda

Course Overview

Student Background

Driving Principles

Overview of Assignments

Course Reflection

Improvements

Assignment Walkthroughs

Course Overview

DESCRIPTION: The purpose of this course is to build skills necessary to perform software testing at the function, class and application level. Students will be taught concepts of black-box (functional and boundary) and white-box (coverage-based) testing, and will apply these concepts to small programs and components (functions and classes). Students will also be taught evaluative techniques such as coverage and mutation testing (error seeding). This course introduces the software engineering discipline of software quality engineering and the legal and societal issues of software quality.

Programming Focus

AUDIENCE: Not software testers but software developers. What distinguishes the course approach is that it stresses the programming aspect of software testing. A goal is to enhance and expand students' programming skills to support activities across the testing lifecycle: C++ programming and Unix shell script programming to automate aspects of software testing.

Students just needed a course to take ...

Conceptual Objectives

The student shall understand:

The software testing lifecycle

The relationship between testing, V&V, SQA

Theoretical/practical limits of software testing

The SPRAE testing framework

Concepts and techniques for black-box/white-box testing

Test case design from a behavioral model

Design patterns for test automation

Test coverage criteria

Issues of software testing management

Performance Objectives

The student shall be able to:

Use the Unix development environment

Write simple Unix shell scripts

Design functional and boundary test cases

Develop manual test scripts

Conduct tests and document results

Write test drivers to automate function, object and application testing

Evaluate test session results; write problem reports

Learning/Evaluation Activities

80% practice / 20% concepts

Lectures (no textbook)

Laboratory assignments: Unix commands and tools; testing tasks

Examinations: 2 online tests; final (online)

Amnesty period (1 test / 2 labs)

Student Background

Reality: 20 students; not particularly interested in testing; low programming skill/experience

Ideal: an interest in software testing; strong programming skills; scientific method (observation, hypothesis forming); sophomore or junior standing; desire for internship in software testing

My Perspective on Teaching Testing

Testing is not just for testers! In an ideal world, fewer testers are required:

Developers have the tester's skills/mentality

Testing overlays the development process

No silver bullet … just bricks:

Simple things provide leverage

No one-size-fits-all

Be driven by a few sound principles

Driving Principles

Testing for Software Developers: duality of developer and tester

Few Basic Concepts: testing lifecycle; philosophy/attitudes (SPRAE)

Learn By Doing: different jobs across the lifecycle

A Testing Lifecycle

[Diagram: the lifecycle stages Analysis, Design, Implementation, Execution, and Evaluation, tied to their artifacts: Specification, Test Strategy/Plan, Test Cases, Test Script/Data/Driver, Test Results, and Defect Data / Problem Reports]

Experience Objectives

Student gains experience at each lifecycle stage

Student uses/enhances existing skills

Student applies different testing competencies

Competencies distinguish novices from the experienced

A Framework for Practicing Software Testing

• Specification: the basis for testing

• Premeditation (forethought, techniques)

• Repeatability of test design, execution, and evaluation (equivalence v. replication)

• Accountability via testing artifacts

• Economy (efficacy) of human, time and computing resources

Key Test Practices

Practitioner -- performs defined test

Builder -- constructs test “machinery”

Designer -- designs test cases

Analyst -- sets test goals, strategy

Inspector -- verifies process/results

Environmentalist -- maintains test tools &

environment

Specialist -- performs test life cycle.

Test Products

Test Report (informal) of manual testing

Test Scripts for manual testing

Test Log (semi-formal)

Application Test Driver (Unix shell script)

Unit/Class Test Driver (C++ program)

Test Data Files

Test Results (automated)

Bug Fix Log (informal)

Specification Products

Narrative specification

Specification Diagrams

Specification Worksheet (pre/post conditions)

Decision Tables

Control Flow Graphs

Assignments Target Skills

Observation Skills: systematic exploration of software behavior

Specification Skills: describe expected or actual behavior

Programming Skills: coding for development of test machinery

Test Design Skills: derive test cases from the specification using a technique

Team Skills: work with other testers

Course Reflection

Testing is programming intensive

Testing requires analytical skills and facility with mathematical tools

Testing generates a data management problem that is amenable to automation

Testing gives students an advantage in entry-level positions

Students take this course too late

Failed Course Expectations

Students test at all levels: no "in-the-large" application (e.g., web-based)

Students develop intuitive testing skills: on the largest project, concepts did not transfer; only 1 in 3 students showed a "knack" for testing

Impact of the balance of concept and experience: poor performance on exams with problems like those in the labs; test case design skills low

Homework needed vs. labs (programming)

Mentoring (timely feedback) did not occur; students left to their own devices too much

Why These Outcomes?

Formalisms are important, but difficult: they provide the behavior model (e.g., decision table), the basis for systematic test case design and automation

Lack of textbook: students need concepts plus lots of examples

Poor availability when students were working: students worked at the last minute; instructor not always around; automated grading lacked one-on-one feedback

Standards-rich/tool-poor environment was a distraction

Assigned work too simple??

Proposed Changes

Improve lecture notes and example bank: find and refine resources and a workbook

Outside-in: testing in-the-large before in-the-small

Recitation/laboratory for discussion and feedback

Increase use of (no-cost) testing tools

Increase use of a collection of code/applications

Examination testbank for practice and learning

Assignment Walkthroughs

(see paper)

Assignment Walkthroughs

Blind Testing

Test Documentation

Specification

Test Automation via Shell Scripts

Unit Test Automation (Driver)

White-Box Unit Testing

Class Testing

Blind Testing I

Objective: Explore the behavior of software without the benefit of a specification

Given: Executables + general description

Results: Students not systematic in exploration or in generalizing observed behavior

Programs: Hello (output based on length of input); Add (1-digit modulus-10 adder, input exception); Pay (pay calculation with an upper bound on pay amount)

Blind Testing II

Programming Objective: Student writes a program that matches the observed behavior from Blind Testing I

Test Objective: Observations from Blind Testing I used as "test cases" for the reverse-engineered program

Results: Students did not see the connection; did not replicate the recorded behavior; did not recognize (via testing) the failure to replicate

SUPPLEMENTAL SLIDES

Student work

SCALING UP

The heart of the approach is to use a decision table as a thinking tool. The most critical task in this process is to identify all the stimuli and responses. When there are many logical combinations of stimuli, the decision table can become large, indicating that the unit is complex and hard to test.

IDENTIFYING BEHAVIOR

Approaches

• Work backwards
  » Identify each response
  » Identify the conditions that provoke the response
  » Identify the separate stimuli

• Work forward
  » Identify the stimuli
  » Identify how each stimulus influences what the unit does
  » Specify the response

IDENTIFYING STIMULI

• Arguments passed upon invocation
• Interactive user inputs
• Internal, secondary data
  » global or class variables
• External data (sources)
  » file or database status variables
  » file or database data
• Exceptions

IT PAYS TO BE A GOOD STIMULUS DETECTIVE

• Failure to identify stimuli results in an incomplete, possibly misleading test case

• The search for stimuli exposes
  » interface assumptions -- a major source of integration problems
  » incomplete design of the unit
  » inadequate provision for exception handling

IDENTIFYING RESPONSES

• Arguments/results passed back on exit
• Interactive user outputs
• Internal, secondary data
  » updated global or class variables
• External data (sinks)
  » output file or database status variables
  » output file or database data
• Exceptions

IT PAYS TO BE A GOOD RESPONSE DETECTIVE

• Failure to identify responses results in
  » incomplete understanding of the software under test
  » shallow test cases
  » incomplete expected results
  » incomplete test "success" verification -- certain effects not checked

• To test, one must know all the effects

A SKETCHING TOOL: Black-Box Schematic

[Diagram: the Software under Test shown as a black box, with stimulus types entering on the left (arguments, interactive inputs, globals, database, exceptions) and response types leaving on the right (arguments, outputs, globals, database, exceptions)]
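To make the schematic concrete, here is a small C++ sketch (a hypothetical unit invented for illustration, not one of the course programs) with its stimuli and responses labeled using the categories above.

    #include <fstream>
    #include <stdexcept>
    #include <string>

    int error_count = 0;                         // internal, secondary data (global); updated below as a response

    // Hypothetical unit: computes a bonus from a rate stored in a file.
    double bonus_from_file(const std::string& filename,    // stimulus: argument
                           double base_pay) {               // stimulus: argument
        std::ifstream in(filename);                         // stimulus: external data source (file status)
        if (!in) {
            ++error_count;                                  // response: updated global
            throw std::runtime_error("cannot open file");   // response: exception
        }
        double rate = 0.0;
        in >> rate;                                         // stimulus: external data (file data)
        return base_pay * rate;                             // response: result passed back on exit
    }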

BEFORE CONTINUING

Much of the discussion so far involves how to identify what software does. We have introduced thinking tools for systematically capturing our findings. These thought processes and tools can be used anywhere in the lifecycle, e.g., in software design!

One Stone for Two Birds!!

Specialist I - Competencies

[Table: competency matrix relating the test roles (Test Practitioner, Test Builder, Test Designer, Test Analyst, Test Inspector, Test Environmentalist, Test Specialist) to competency levels 1 through 5]

BOUNDARY TESTING DESIGN METHODOLOGY

• Specification

• Identify elementary boundary conditions

• Identify boundary points

• Generate boundary test cases

• Update test script (add boundary cases).

EXAMPLE: Pay Calculation (1) Specification

• Compute pay for employee, given the number of hours worked and the hourly pay rate. For hourly employees (rate < 30), compute overtime at 1.5 times hourly rate for hours in excess of 40. Salaried employees (rate >= 30) are paid for exactly 40 hours.
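A minimal C++ sketch of a unit that matches this specification (the name pay and the double parameters are assumptions; the course's actual unit may differ):

    // Hourly (rate < 30): straight time up to 40 hours, then 1.5x rate for overtime.
    // Salaried (rate >= 30): paid for exactly 40 hours.
    double pay(double hours, double rate) {
        if (rate >= 30.0)                  // salaried
            return 40.0 * rate;
        if (hours <= 40.0)                 // hourly, no overtime
            return hours * rate;
        return 40.0 * rate + 1.5 * rate * (hours - 40.0);   // hourly, overtime
    }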

EXAMPLE B(2) Identify Behaviors

• Case 1: Hourly AND No Overtime
  » (Rate < 30) & (Hours <= 40)
  » Expect Pay = Hours * Rate

• Case 2: Hourly AND Overtime
  » (Rate < 30) & (Hours > 40)
  » Expect Pay = 40*Rate + 1.5*Rate*(Hours - 40)

• Case 3: Salaried (Rate >= 30)
  » Expect Pay = 40 * Rate

DECISION TABLE

Condition               | 1 | 2 | 3 | 4
c1: Rate < 30           | Y | Y | N | N
c2: Hours <= 40         | Y | N | Y | N
Action                  |   |   |   |
a1: Pay = Straight time | X |   |   |
a2: Pay = Overtime      |   | X |   |
a3: Pay = Professional  |   |   | X | X

Columns define behaviors.

EXAMPLE B(3) Create Test Cases

• One test case per column of decision table

» Case 1: Hourly, No Overtime

» Case 2: Hourly, Overtime

» Case 3: Salaried, No Extra Hours

» Case 4: Salaried, Extra Hours

• Order the test cases by column

EXAMPLE B(4) Write Test Script

Step | Stimuli: Hours Rate | Expected Response: Pay =
  1  |   30   10           |  300
  2  |   50   10           |  550
  3  |   30   40           | 1600
  4  |   50   40           | 1600
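Step (5) of the boundary methodology would extend this script with cases at and just past the boundaries Rate = 30 and Hours = 40. A sketch of such cases, with expected pay computed by hand from the specification (the specific values below are illustrative, not taken from the course materials):

    #include <cstdio>

    double pay(double hours, double rate);   // unit under test, linked separately (e.g., the sketch above)

    struct Case { double hours, rate, expected; };

    int main() {
        // Boundary cases around Hours = 40 and Rate = 30.
        const Case boundary_cases[] = {
            {40, 29, 1160.0},   // hourly at the hours boundary: 29 * 40
            {41, 29, 1203.5},   // one hour of overtime: 1160 + 1.5 * 29
            {40, 30, 1200.0},   // salaried at the rate boundary: 30 * 40
            {41, 30, 1200.0},   // salaried; the extra hour is not paid
        };
        for (const Case& c : boundary_cases)
            std::printf("%5.1f %5.1f  expected %7.1f  actual %7.1f\n",
                        c.hours, c.rate, c.expected, pay(c.hours, c.rate));
        return 0;
    }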

Testing Modules -- Drivers

A test driver executes a unit with test case data and captures the results.

[Diagram: the Driver feeds each test case from the test set data to the Unit as arguments, then captures the results and external effects into the test set results]

Implementing Test Drivers

Complexity:
  Arguments/results only
  Special set-up required to execute the unit
  External effects capture/inquiry
  Oracle announcing "PASS"/"FAIL"

Major Benefits:
  Automated, repeatable test script
  Documented evidence of testing
  Universal design pattern

Test Driver for Unit Pay

Driver D_pay uses unit_environment E;
{
    declare Hrs, Rate, expected;
    testcase_no = 0;
    open tdi_file("tdi-pay.txt");
    open trs_file("trs-pay.txt");
    while (more data in tdi_file) {
        read(tdi_file, Hrs, Rate);
        read(tdi_file, expected);
        testresult = pay(Hrs, Rate);
        write(trs_file, testcase_no++, Hrs, Rate, expected, testresult);
    } //while
    close tdi_file, trs_file;
} //driver
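A runnable C++ rendering of the same driver (a sketch: it assumes pay() is compiled separately and that each record in tdi-pay.txt holds the two stimuli followed by the expected pay, in the order read below; adjust the reads to match the actual file layout):

    #include <fstream>

    double pay(double hours, double rate);   // unit under test, linked separately

    int main() {
        std::ifstream tdi_file("tdi-pay.txt");   // test data in
        std::ofstream trs_file("trs-pay.txt");   // test results out
        int testcase_no = 0;
        double hrs, rate, expected;
        while (tdi_file >> hrs >> rate >> expected) {
            double testresult = pay(hrs, rate);
            trs_file << ++testcase_no << ' ' << hrs << ' ' << rate << ' '
                     << expected << ' ' << testresult << '\n';
        }
        return 0;
    }

Adding an oracle that also writes "PASS" or "FAIL" when testresult differs from expected removes the need to inspect the results file by hand, at the cost of one more comparison per case.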

Test Driver Files (Pay)

Test Data File -- Test Script!!
File name: tdi-pay.txt
Format (test cases only): rate hours expected-pay
File content:
10 40 400
10 50 550
10 0 0
Note: No environment setup.

Test Results File
File name: trs-pay.txt
Format: case# rate hours exp-pay act-pay
File content:
1 10 40 400 400   (Pass)
2 10 50 550 500   (Fail)
3 10 0 0 0        (Pass)
Note: Results file must be inspected for failures.

Testing Classes -- Drivers (Black-Box)

[Diagram: the class test driver reads the test set data, invokes the class's method(s) with arguments, and records the results in the test set results; the class state stays hidden behind the class interface]

Example -- Stack Class

class Stack {
public:
    Stack();
    void push(int n);
    int pop();
    int top();
    bool empty();
private:
    int Size;
    int Top;
    int Values[100];
};

Notes:

(1) Class state -- variables Size, Top and the first 'Size' values in array Values.
(2) Methods push and pop modify the class state; top and empty inquire about the state.
(3) Stack does not require any test environment of its own.
(4) Class state is HIDDEN from the test, i.e., black box.

Test Driver Files (Stack class)

Test Data File (tdi-stack.txt)
File content:
1 8        (push 8)
1 7        (push 7)
3 7        (top; should be 7)
2 7        (pop; should be 7)
2 8        (pop; should be 8)
4 true     (empty; stack should be empty)
Note: No test environment setup.
Methods: 1-push, 2-pop, 3-top, 4-empty

Test Results File (trs-stack.txt)
File content:
1 1 8
2 1 7
3 3 7 8   (Fail)
4 2 7 8   (Fail)
5 2 8 7   (Fail)
6 4 1     (Pass)
Note: Results file must be inspected for pass/fails.
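A minimal C++ sketch of a class test driver that would consume tdi-stack.txt in this format and produce a results file like trs-stack.txt (the header name Stack.h and the parsing details are assumptions based on the layout shown above):

    #include <fstream>
    #include <string>
    #include "Stack.h"   // assumed header declaring the Stack class under test

    int main() {
        std::ifstream tdi_file("tdi-stack.txt");   // test data in
        std::ofstream trs_file("trs-stack.txt");   // test results out
        Stack s;                                   // object under test
        int method = 0, value = 0, testcase_no = 0;
        std::string expected_empty;
        while (tdi_file >> method) {
            trs_file << ++testcase_no << ' ' << method;
            switch (method) {
            case 1:                                // push(n): no result to record
                tdi_file >> value;
                s.push(value);
                trs_file << ' ' << value;
                break;
            case 2:                                // pop(): write expected then actual
                tdi_file >> value;
                trs_file << ' ' << value << ' ' << s.pop();
                break;
            case 3:                                // top(): write expected then actual
                tdi_file >> value;
                trs_file << ' ' << value << ' ' << s.top();
                break;
            case 4:                                // empty(): expected written as true/false
                tdi_file >> expected_empty;
                trs_file << ' ' << expected_empty << ' ' << s.empty();
                break;
            }
            trs_file << '\n';
        }
        return 0;
    }

As with the Pay driver, the results file still has to be inspected by hand, which is how the Fail/Fail/Fail/Pass outcomes above were determined.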