
Page 1: slides

CS 160: Software Engineering
April 8 Class Meeting

Department of Computer Science
San Jose State University

Spring 2008
Instructor: Prof. Ron Mak

http://www.cs.sjsu.edu/~mak

Page 2: slides


Unofficial Field Trip?

Computer History Museum in Mountain View. See http://www.computerhistory.org/

You provide your own transportation to and from the museum. Completely voluntary, no credit.

Approximately 2 hours during a Saturday afternoon later this month or early next month.

Docent-led tour of various computer systems and artifacts from the 1940s through the 1990s: ENIAC, Enigma, SAGE, IBM 360, Cray supercomputers, etc.

See, hear, and touch an actual IBM 1401 computer system (c. 1959-1965) that has been restored and is operational: 800 cpm card reader, flashing console lights, spinning tape drives with vacuum columns, 600 lpm line printer.

Page 3: slides


Software Reliability

Reliable software has a low probability of failure while operating for a specified period of time under specified operating conditions.

The specified time period and the specified operating conditions are part of the nonfunctional requirements.

For some applications, reliability may be more important than the other functional and nonfunctional requirements: mission-critical applications, medical applications.

Page 4: slides


Software Reliability (cont’d)

Reliable software is the result of good software quality assurance (SQA) throughout an application’s life cycle.

Design: good requirements elicitation, good object-oriented analysis and design, good architecture.

Development: good management (e.g., source control, reasonable schedules), good coding practices (e.g., design patterns), good testing practices.

Deployment: good preventive maintenance (e.g., training), good monitoring, good failure analysis (when failures do occur).

Page 5: slides


Software Testing

Some key questions:
What is testing?
What is a successful test?
Who does testing?
When does testing occur?
What are the different types of testing?
What testing tools are available?
How do you know your tests covered everything?
When can you stop testing?

Page 6: slides


What is testing?

Testing is a systematic procedure to discover faults in software in order to prevent failure.

Failure: A deviation of the software’s behavior from its specified behavior (as per its requirements). Failures can range from minor to major (such as a crash).

Erroneous state: A state that the operating software is in that will lead to a failure. Example: low on memory.

Fault: What caused the software to enter an erroneous state. Also known as a defect or bug. Example: a memory leak.
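
To make these three terms concrete, here is a minimal illustrative sketch (the EventBus class is hypothetical, not from the slides): the fault is the coding mistake, the erroneous state is the memory it slowly exhausts, and the failure is the eventual crash.

import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of fault -> erroneous state -> failure.
public class EventBus {
    private final List<Object> listeners = new ArrayList<Object>();

    public void subscribe(Object listener) {
        // Fault (defect, bug): listeners are added but never removed -- a memory leak.
        listeners.add(listener);
    }

    // Erroneous state: after many subscribe() calls the heap is nearly full
    // (the application is low on memory).
    // Failure: the JVM eventually throws OutOfMemoryError -- a crash, i.e., a
    // deviation from the specified behavior.
}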

Page 7: slides


What is a successful test?

Testing is the opposite of coding. Coding: Create software and try to get it to work. Testing: Break the software and demonstrate that it doesn’t work.

Testing and coding require different mindsets. It can be very difficult for developers to test their own code. If you wrote the code, you psychologically want it to work and not see it break. Since you know how the code should be used, you may not think to try using it in ways other than as you intended.

A successful test is one that finds bugs.

Page 8: slides


Who does testing?

Developers: As difficult as it may be, you must test your own code. Also test each other’s code (peer testing) and test interfaces.

Users.

Testers: Members of the Testing or Quality Assurance (QA) department, software engineers who did not write the code, and manual writers and trainers who create examples and demos.

Page 9: slides


When does testing occur?

Recall the old Waterfall Model: Requirements → Design → Implementation → Testing.

In the new Agile methodology, testing is part of each and every iteration. Therefore, testing occurs throughout development, not just at the end.

Page 10: slides


What are the different types of testing?

Usability testing: Developers and users test the user interface.

Unit testing: Developers test an individual “unit”. A unit is a small set of related components, such as the components that implement a use case.

Integration testing: Developers test how their units work together with other units.

System testing: Test how an entire system works. Includes performance testing and stress testing.

Page 11: slides


Alpha Testing vs. Beta Testing

Alpha testing: Usability and system testing of a nearly complete application in the development environment.

Beta testing: Usability and system testing of a complete or nearly complete application in the user’s environment.

Today, it is not uncommon to release an application to the public for beta testing. New releases of web-based applications are put “into beta” by software companies.

Page 12: slides


Usability Testing: The German Newton

The Apple Newton was an early PDA developed in the early 1990s. Besides the English version, there was a German and a Japanese version. It was too far ahead of its time and was killed by Steve Jobs after he returned to Apple.

A key feature of the Newton was handwriting recognition (“Handschrifterkennung”). The Newton recognized successive words in a sentence using an algorithm that tracked the movement of the stylus.

Cultural differences between Americans and Germans: the way Germans write their letters (e.g., the letter h), the way Germans write words (e.g., “Geschäftsreise”, “business trip”), and their philosophy about personal handwriting.

Page 13: slides


Usability Testing: NASA Mars Rover Mission

Usability testing for the Collaborative Information Portal (CIP) software system.

NASA’s Jet Propulsion Laboratory (JPL) conducted several Operational Readiness Tests (ORTs) before the actual rovers landed. A working rover was inside a large “sandbox” in a separate warehouse building at JPL. Mission control personnel communicated with and operated the sandbox rover as if it were on Mars. A built-in delay simulated the signal travel time between Earth and Mars.

Mission scientists accessed and analyzed the data and images downloaded by the sandbox rover.

All mission software, including CIP, was intensely tested in this simulated environment.

Page 14: slides


Unit Testing

Each unit test focuses on components created by a developer. Unit testing should be done by the developer before checking in the code. It is easier to find and fix bugs when there are fewer components involved. This is bottom-up testing.

Developers create test cases to do unit tests.

Page 15: slides


Unit Testing: Test Cases

Test case: A set of input values for the unit and a corresponding set of expected output values. Use case → test case: do unit testing on the participating objects of the use case.

A test case can be run within a testing framework (AKA test bed, test harness) consisting of a test driver and one or more test stubs. Test driver: Simulates the part of the system that calls the unit; it calls the unit and passes in the input values. Test stub: Simulates a component that the unit depends on; when called by the unit, a test stub responds in a “reasonable” manner.
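
For illustration, a minimal JUnit 3 sketch of a testing framework around a unit (the PriceService, OrderProcessor, and stub names are hypothetical, not from the slides): the driver supplies the input values and checks the outputs, and the stub stands in for the unit's dependency.

package cs160.test;

import junit.framework.TestCase;

// Dependency of the unit under test.
interface PriceService {
    double priceOf(String itemId);
}

// The "unit" under test: computes an order total using a price lookup.
class OrderProcessor {
    private final PriceService prices;
    OrderProcessor(PriceService prices) { this.prices = prices; }
    double total(String itemId, int quantity) {
        return prices.priceOf(itemId) * quantity;
    }
}

// Test stub: stands in for the real PriceService and responds "reasonably".
class PriceServiceStub implements PriceService {
    public double priceOf(String itemId) { return 2.50; }  // canned value
}

// Test driver: plays the caller's role, passes in input values, checks outputs.
public class OrderProcessorTester extends TestCase {
    public void testTotal() {
        OrderProcessor unit = new OrderProcessor(new PriceServiceStub());
        assertEquals(7.50, unit.total("X-100", 3), 0.0001);
    }
}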

Page 16: slides


Unit Testing: Testing Framework

[Diagram: the unit test driver calls the unit under test, which in turn calls its test stubs.]

Page 17: slides


Black Box Testing vs. White Box Testing

Black box testing: Deals only with the input/output behavior of the unit. The internals of the unit are not considered.

White box testing: Tests the internal behavior of the unit, such as its execution paths and state transitions.

Page 18: slides


Unit Testing: Equivalence Testing

Minimize the number of test cases by partitioning the possible inputs into equivalence classes. Assumption: For each equivalence class, the unit behaves the same way for any input from the class.

Example: A calendar unit that takes a year and a month as input. Year equivalence classes: leap years and non-leap years. Month equivalence classes: months with 30 days, months with 31 days, and February (28 or 29 days).

Black box test.
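
As a sketch, equivalence tests for such a calendar unit in JUnit 3 style (the CalendarUnit class and its daysInMonth method are illustrative assumptions, not code from the slides), with one representative test per equivalence class:

package cs160.test;

import junit.framework.TestCase;

// Hypothetical unit: reports the number of days in a given month of a given year.
class CalendarUnit {
    int daysInMonth(int year, int month) {
        switch (month) {
            case 4: case 6: case 9: case 11:
                return 30;
            case 2:
                boolean leap = (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
                return leap ? 29 : 28;
            default:
                return 31;
        }
    }
}

// One test per equivalence class; any representative input from the class should do.
public class CalendarEquivalenceTester extends TestCase {
    private final CalendarUnit unit = new CalendarUnit();

    public void test31DayMonth()      { assertEquals(31, unit.daysInMonth(2007, 1)); }
    public void test30DayMonth()      { assertEquals(30, unit.daysInMonth(2007, 4)); }
    public void testFebruaryNonLeap() { assertEquals(28, unit.daysInMonth(2007, 2)); }
    public void testFebruaryLeap()    { assertEquals(29, unit.daysInMonth(2008, 2)); }
}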

Page 19: slides


Unit Testing: Boundary Testing

Test the boundaries (the extremes, or “edges”) of the equivalence classes.

Calendar component examples: Month 0 and month 13. Months with the incorrect number of days. Invalid leap years (e.g., not a multiple of 4)

Black box test.
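
A companion sketch for boundary testing (the DateValidator class is an illustrative assumption, not from the slides): the tests probe month 0, month 13, impossible day counts, and leap-year edge cases.

package cs160.test;

import junit.framework.TestCase;

// Hypothetical validator for (year, month, day) dates.
class DateValidator {
    boolean isValid(int year, int month, int day) {
        if (month < 1 || month > 12 || day < 1) return false;
        int[] maxDays = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
        int max = maxDays[month - 1];
        if (month == 2 && ((year % 4 == 0 && year % 100 != 0) || year % 400 == 0)) max = 29;
        return day <= max;
    }
}

// Boundary tests exercise the "edges" of the equivalence classes.
public class DateBoundaryTester extends TestCase {
    private final DateValidator v = new DateValidator();

    public void testMonthZero()        { assertFalse(v.isValid(2008,  0, 15)); }
    public void testMonthThirteen()    { assertFalse(v.isValid(2008, 13, 15)); }
    public void testLastDayOfApril()   { assertTrue (v.isValid(2008,  4, 30)); }
    public void testApril31Invalid()   { assertFalse(v.isValid(2008,  4, 31)); }
    public void testFeb29NonLeapYear() { assertFalse(v.isValid(2007,  2, 29)); }
    public void testFeb29LeapYear()    { assertTrue (v.isValid(2008,  2, 29)); }
}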

Page 20: slides


Unit Testing: Monte Carlo Testing

Monte Carlo (a famous casino in Monaco) refers to the random generation of input values.

Used when the input values for a unit are too numerous and cannot be partitioned into equivalence classes.

The test driver generates random values from the input domain and passes them to the unit.

Any erroneous output narrows your search for the bug.

Black box test.
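
As a sketch, a Monte Carlo test driver for the Calculator class from the JUnit demo later in these slides (the loop count and seed are arbitrary choices): it generates random inputs, computes the expected result itself, and reports the offending inputs whenever the unit's output is erroneous.

package cs160.test;

import java.util.Random;

import cs160.util.Calculator;
import junit.framework.TestCase;

// Monte Carlo test driver: feeds randomly generated inputs to the unit and
// compares each result against an independently computed expected value.
public class CalculatorMonteCarloTester extends TestCase {
    public void testAddWithRandomInputs() {
        Calculator calculator = new Calculator();
        Random random = new Random(160);  // fixed seed => reproducible failures

        for (int i = 0; i < 10000; i++) {
            double first  = (random.nextDouble() - 0.5) * 1.0e6;
            double second = (random.nextDouble() - 0.5) * 1.0e6;

            double expected = first + second;  // oracle computed by the driver
            double result   = calculator.add(first, second);

            // The message reports the inputs, which narrows the search for the bug.
            assertEquals("add(" + first + ", " + second + ")", expected, result, 1.0e-9);
        }
    }
}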

Page 21: slides


Unit Testing: Path Testing

Analyze the execution paths within the unit. Draw a flow graph.

Generate input values to force the unit to follow each and every path at least once.

White box test.
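
A minimal sketch (the ShippingCalculator unit is hypothetical, not from the slides): its flow graph has two independent decisions, giving four execution paths, and each test case forces the unit down one of them.

package cs160.test;

import junit.framework.TestCase;

// Hypothetical unit: two independent branches => four execution paths.
class ShippingCalculator {
    double cost(double weightKg, boolean express) {
        double cost;
        if (weightKg <= 1.0) {                     // decision 1
            cost = 5.0;
        } else {
            cost = 5.0 + 2.0 * (weightKg - 1.0);
        }
        if (express) {                             // decision 2
            cost *= 2.0;
        }
        return cost;
    }
}

// One test per path through the flow graph (light/heavy x standard/express).
public class ShippingPathTester extends TestCase {
    private final ShippingCalculator unit = new ShippingCalculator();

    public void testLightStandard() { assertEquals( 5.0, unit.cost(0.5, false), 0.0); }
    public void testLightExpress()  { assertEquals(10.0, unit.cost(0.5, true),  0.0); }
    public void testHeavyStandard() { assertEquals( 9.0, unit.cost(3.0, false), 0.0); }
    public void testHeavyExpress()  { assertEquals(18.0, unit.cost(3.0, true),  0.0); }
}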

Page 22: slides


Unit Testing: State Transition Testing

Similar to path testing.

Analyze the states that the unit can be in. Draw a UML statechart diagram.

Generate input data that forces the unit to transition into each and every state at least once.

White box test.
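
A minimal sketch (the two-state Turnstile unit is hypothetical, not from the slides): the test cases drive the unit into each state, and across each transition of its statechart, at least once.

package cs160.test;

import junit.framework.TestCase;

// Hypothetical unit: a turnstile with two states and two inputs (coin, push).
class Turnstile {
    enum State { LOCKED, UNLOCKED }
    private State state = State.LOCKED;

    State getState() { return state; }
    void coin() { state = State.UNLOCKED; }  // a coin always unlocks
    void push() { state = State.LOCKED; }    // pushing through locks it again
}

// Drive the unit into each state (and across each transition) at least once.
public class TurnstileStateTester extends TestCase {
    public void testStartsLocked() {
        assertEquals(Turnstile.State.LOCKED, new Turnstile().getState());
    }

    public void testCoinUnlocks() {
        Turnstile t = new Turnstile();
        t.coin();
        assertEquals(Turnstile.State.UNLOCKED, t.getState());
    }

    public void testPushAfterCoinLocksAgain() {
        Turnstile t = new Turnstile();
        t.coin();
        t.push();
        assertEquals(Turnstile.State.LOCKED, t.getState());
    }
}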

Page 23: slides


Unit Testing Tool: JUnit

JUnit: A unit testing framework for Java components. A member of the xUnit family of testing frameworks for various programming languages.

Open source: Download from http://junit.sourceforge.net/

JUnit demo.

Page 24: slides


JUnit Demo

Class to test:

package cs160.util;

public class Calculator {
    public double add(double first, double second) {
        return first + second;
    }

    public double subtract(double first, double second) {
        return second - first;  // oops!
    }

    public double multiply(double first, double second) {
        return first * second;
    }
}

Page 25: slides


JUnit Demo (cont’d)

Unit test cases:

package cs160.test;

import cs160.util.Calculator;
import junit.framework.TestCase;

public class CalculatorTester extends TestCase {
    public void testAdd() {
        Calculator calculator = new Calculator();
        double result = calculator.add(40, 30);
        assertEquals(70.0, result, 0.0);
    }

    public void testSubtract() {
        Calculator calculator = new Calculator();
        double result = calculator.subtract(40, 30);
        assertEquals(10.0, result, 0.0);
    }
}

Page 26: slides


Unit Testing Tool: Clover

Clover: A unit testing tool that tells you what parts of your code you have actually executed (“covered”).

Unfortunately, not open source. Download a 30-day trial from http://www.atlassian.com/software/clover/

Clover demo.

Page 27: slides


Integration Testing

After your code has passed all the unit tests, you must do integration testing to see how your unit works with other units, whether those units were written by you or by other developers.

Create an Ant script that builds the part of the application that contains: your unit, the units that call your unit, and the units that your unit calls.

Page 28: slides


Regression Tests

Sad fact of life: When you fix one bug, you might very well introduce one or more new bugs. Tests that have passed previously may now show failures – the system has regressed.

Regression tests: Re-run earlier integration tests to make sure they still pass. Do this every time you make a change. Create Ant scripts to run the tests. Schedule regression tests to run automatically on a periodic basis, e.g., overnight every night.
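
For example, a JUnit 3 style suite class (the AllTests name and the set of added testers are illustrative assumptions) that bundles earlier tests so an Ant task or a nightly job can re-run them all as one regression test:

package cs160.test;

import junit.framework.Test;
import junit.framework.TestSuite;

// Regression suite: bundles previously passing tests so they can be re-run
// after every change (e.g., from an Ant script or a scheduled nightly build).
public class AllTests {
    public static Test suite() {
        TestSuite suite = new TestSuite("CS 160 regression tests");
        suite.addTestSuite(CalculatorTester.class);    // earlier unit tests
        // suite.addTestSuite(OtherUnitTester.class);  // hypothetical: add each new tester here
        return suite;
    }
}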

Page 29: slides


System Testing

Test the entire application within its operating environment: installation testing, functional testing, performance testing.

Part of beta testing.

Acceptance testing: Signoff by stakeholders, clients, and customers.

Page 30: slides


When can you stop testing?

“Testing can prove the presence of bugs, but never their absence.” (paraphrasing Edsger Dijkstra)

Stop testing when: all the regression tests pass, and testing finds only “acceptable” bugs – bugs that are put on the Known Bugs list or that have workarounds.

When you’ve run out of time.

Page 31: slides


Next class meeting…

Logging and monitoring, stress testing, Test-Driven Development (TDD), failure analysis, and creating test plans.