
Page 1: Designckverma.com/Book/Design.doc · Web viewSince solution to complex problems isn’t usually found in the first try, iterations are most likely required. This is true for software

Written By:

Chander Kant

Lesson 5

System Design

Software Design Fundamentals include:

Design Objective

Design Principle

Design Process

Design Objectives

The design document that we will develop during this phase is the blueprint of the software. It describes how the solution to the customer's problem is to be built. Since the solution to a complex problem is not usually found on the first try, iterations are most likely required. This is true for software design as well. For this reason, any design strategy, design method, or design language must be flexible and must easily accommodate the changes that arise from design iterations.

Complex problems are not usually solved in one step. Rather, they are solved using the principle of "divide and conquer": the problem is divided into a set of sub-problems that are solved more easily, and the partial solutions are then combined into a solution to the overall problem. Any design technique needs to support and guide this partitioning process in such a way that the resulting sub-problems are as independent as possible from each other and can be


combined easily for the solution to the overall problem. Sub-problem independence and easy combination of their solutions reduce the complexity of the problem. This is the objective of the partitioning process. Partitioning or decomposition during design involves three types of decisions:

Define the boundaries along which to break;

Determine how many pieces to break it into; and

Identify the proper level of detail when design should stop and implementation should start.
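The "divide and conquer" partitioning described above is the same idea behind many classic algorithms. The following sketch (a hypothetical illustration in Python, not part of the original text) partitions a sorting problem into independently solvable sub-problems and then combines the partial solutions:

```python
def merge(a, b):
    """Combine step: merge two already-sorted partial solutions."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def merge_sort(items):
    """Divide step: split the problem into two independent sub-problems."""
    if len(items) <= 1:               # small enough to solve directly
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # each half is solved separately...
    right = merge_sort(items[mid:])
    return merge(left, right)         # ...and the solutions are combined
```

Note that the two halves are fully independent of each other; only the combine step needs to know about both, which keeps the added complexity of partitioning low.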

Basic design principles that enable the software engineer to navigate the design process have been adapted and extended in the following list:

The design should not suffer from "tunnel vision." A good designer should consider alternative approaches, judging each based on the requirements of the problem and the resources available to do the job.

The design should be traceable to the analysis model. Because a single element of the design

model often traces to multiple requirements, it is necessary to have a means for tracking how

requirements have been satisfied by the design model.

The design should not reinvent the wheel. Systems are constructed using a set of design

patterns, many of which have likely been encountered before. These patterns should always

be chosen as an alternative to reinvention. Time is short and resources are limited! Design

time should be invested in representing truly new ideas and integrating those patterns that

already exist.

The design should "minimize the intellectual distance" between the software and the problem

as it exists in the real world. That is, the structure of the software design should (whenever

possible) mimic the structure of the problem domain.

The design should exhibit uniformity and integration. A design is uniform if it appears that

one person developed the entire thing. Rules of style and format should be defined for a

design team before design work begins. A design is integrated if care is taken in defining

interfaces between design components.

The design activity begins when the requirement document for the software to be developed is

available. This may be the SRS for the complete system, as is the case if the waterfall model is


being followed, the requirements for the next "iteration" if iterative enhancement is being followed, or the requirements for the prototype if prototyping is being followed. While the

requirement specification activity is entirely in the problem domain, design is the first step in

moving from the problem domain toward the solution domain. Design is essentially the bridge

between requirement specification and the final solution for satisfying the requirements.

Design Principles

Fundamental concepts and specific techniques characterize every intellectual discipline.

Techniques are the manifestations of the concepts as they apply to a particular situation.

Techniques come and go with changes in technology, intellectual fads, economic conditions, and social concerns. By definition, fundamental principles remain the same throughout; they provide the underlying basis for the development and evaluation of techniques. Fundamental concepts of

software design include abstraction, structure, information hiding, modularity, concurrency,

verification, and design aesthetics.

The design of a system is correct, if a system built precisely according to the design satisfies the

requirements of that system. Clearly, the goal during the design phase is to produce a correct design.

However, correctness is not the sole criterion during the design phase, as there can be many

correct designs. The goal of the design process is not simply to produce a design for the system.

Instead, the goal is to find the best possible design within the limitations imposed by the

requirements and the physical and social environment in which the system will operate.

To evaluate a design, we have to specify some properties and criteria that can be used for

evaluation. Ideally, these properties should be as quantitative as possible. In that situation, we can

mathematically evaluate the "goodness" of a design and use mathematical techniques to formally

determine the best design. However, as with many objects in the real world, criteria for quality of

software design are often subjective or non-quantifiable. In such a situation, criteria are essentially

rules of thumb that aid design evaluation.

A design should clearly be verifiable, complete (implements all the specifications), and traceable

(all design elements can be traced to some requirements). However, the two most important

properties that concern designers are efficiency and simplicity. Efficiency of any system is

concerned with the proper use of scarce resources by the system. The need for efficiency arises

due to cost considerations. If some resources are scarce and expensive, it is desirable that those


resources be used efficiently. In computer systems, the resources that are most often considered

for efficiency are processor time and memory. An efficient system is one that consumes less

processor time and requires less memory. In earlier days, the efficient use of CPU and memory

was important due to the high cost of hardware. Now that the hardware costs are small compared

to the software costs, for many software systems, traditional efficiency concerns now take a back

seat compared to other considerations. One of the exceptions is real-time systems, where there are

strict execution time constraints.

The Design Process

The design process should follow these guidelines:

1. A design should exhibit an architectural structure that (a) has been created using recognizable

design patterns, (b) is composed of components that exhibit good design characteristics, and

(c) can be implemented in an evolutionary fashion, thereby facilitating implementation and

testing.

2. A design should be modular; that is, the software should be logically partitioned into

elements that perform specific functions and sub-functions.

3. A design should contain distinct representations of data, architecture, interfaces, and

components (modules).

4. A design should lead to data structures that are appropriate for the objects to be implemented

and are drawn from recognizable data patterns.

5. A design should lead to components that exhibit independent functional characteristics.

6. A design should lead to interfaces that reduce the complexity of connections between modules

and with the external environment.

7. A design should be derived using a repeatable method that is driven by information obtained

during software requirement analysis.

These criteria are not achieved by chance. The software design process encourages good design

through the application of fundamental design principles, systematic methodology and thorough

review.


Problem Partitioning and Hierarchy

When solving a small problem, the entire problem can be tackled at once. The complexity of large

problems and the limitations of human minds do not allow large problems to be treated as huge

monoliths. For solving larger problems, the basic principle is the time-tested principle of "divide

and conquer." Clearly, dividing in such a manner that all the divisions have to be conquered

together is not the intent of this wisdom. This principle, if elaborated, would mean, "divide into

smaller pieces, so that each piece can be conquered separately."

For software design, therefore, the goal is to divide the problem into manageably small pieces that

can be solved separately. It is this restriction of being able to solve each part separately that makes

dividing into pieces a complex task and that many methodologies for system design aim to

address. The basic rationale behind this strategy is the belief that if the pieces of a problem are

solvable separately, the cost of solving the entire problem is more than the sum of the cost of

solving all the pieces.

However, the different pieces cannot be entirely independent of each other, as they together form

the system. The different pieces have to cooperate and communicate to solve the larger problem.

This communication adds complexity, which arises due to partitioning and may not have existed in

the original problem. As the number of components increases, the cost of partitioning, together

with the cost of this added complexity, may become more than the savings achieved by

partitioning. It is at this point that no further partitioning needs to be done. The designer has to

make the judgment about when to stop partitioning.

Two of the most important quality criteria for software design are simplicity and

understandability. It can be argued that maintenance is minimized if each part in the system can be

easily related to the application and each piece can be modified separately. If a piece can be

modified separately, we call it independent of other pieces. If module A is independent of module

B, then we can modify A without introducing any unanticipated side effects in B. Total

independence of modules of one system is not possible, but the design process should support as

much independence as possible between modules. Dependence between modules in a software

system is one of the reasons for high maintenance costs. Clearly, proper partitioning will make the

system easier to maintain by making the design easier to understand. Problem partitioning also

aids design verification.


Problem partitioning, which is essential for solving a complex problem, leads to hierarchies in the

design. That is, the design produced by using problem partitioning can be represented as a

hierarchy of components. The relationship between the elements in this hierarchy can vary

depending on the method used. For example, the most common is the "whole-part" relationship.

In this, the system consists of some parts; each part consists of subparts, and so on. This

relationship can be naturally represented as a hierarchical structure between various system parts.

In general, hierarchical structure makes it much easier to comprehend a complex system. Due to

this, all design methodologies aim to produce a design that has nice hierarchical structures.

Abstraction

It is a tool that permits a designer to consider a component at an abstract level, without worrying about the details of the component's implementation.

Abstraction is used for existing components as well as components that are being designed.

Abstraction of existing components plays an important role in the maintenance phase. To modify a

system, the first step is understanding what the system does and how. The process of

comprehending an existing system involves identifying the abstractions of sub-systems and

components from the details of their implementations. Using these abstractions, the behavior of

the entire system can be understood. This also helps determine how modifying a component

affects the system.

Control abstraction is another form of abstraction used in software design. Like procedural and

data abstraction, control abstraction implies a program control mechanism without specifying

internal details. An example of a control abstraction is the synchronization semaphore used to

coordinate activities in an operating system.
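As a small illustration of control abstraction (a sketch, not from the original text), Python's threading.Semaphore coordinates concurrent activities while hiding the blocking and wake-up mechanism behind acquire and release:

```python
import threading

# The semaphore abstracts the control rule "at most 2 workers at a time"
# without exposing how waiting and waking are implemented internally.
slots = threading.Semaphore(2)
lock = threading.Lock()
done = []

def worker(n):
    with slots:          # acquire: blocks if two workers are already inside
        with lock:       # a second control abstraction protecting the list
            done.append(n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The caller reasons only about the abstract guarantee ("no more than two at once"), which is exactly the point of control abstraction.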

There are two common abstraction mechanisms for software systems: functional abstraction and

data abstraction.

In functional abstraction, a module is specified by the function it performs. For example, a

module to compute the log of a value can be abstractly represented by the function log. Similarly,

a module to sort an input array can be represented by the specification of sorting. Functional

abstraction is the basis of partitioning in function-oriented approaches. That is, when the problem

is being partitioned, the overall transformation function for the system is partitioned into smaller


functions that comprise the system function. The decomposition of the system is in terms of

functional modules.
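The log and sort examples above can be expressed directly in code. In the following Python sketch (the function names are invented for illustration), each module is characterized entirely by the function it performs, not by how it performs it:

```python
import math

def log_of(x: float) -> float:
    """Specified purely by what it computes: the natural log of x.
    Callers depend on this specification, not on the algorithm used."""
    if x <= 0:
        raise ValueError("log is defined only for positive values")
    return math.log(x)

def sort_array(values: list) -> list:
    """Specified by 'returns the values in ascending order'; whether
    this uses quicksort, mergesort, etc. is hidden from the caller."""
    return sorted(values)
```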

The second unit for abstraction is data abstraction. Any entity in the real world provides some

services to the environment to which it belongs. Often the entities provide some fixed predefined

services. The case of data entities is similar. Certain operations are required from a data object,

depending on the object and the environment in which it is used. Data abstraction supports this

view. Data is not treated simply as objects, but is treated as objects with some predefined

operations on them. The operations defined on a data object are the only operations that can be

performed on those objects. From outside an object, the internals of the object are hidden; only the

operations on the object are visible. Data abstraction forms the basis for object-oriented design,

which is discussed in the next chapter. In using this abstraction, a system is viewed as a set of

objects providing some services. Hence, the decomposition of the system is done with respect to

the objects the system contains.
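A stack is a standard illustration of data abstraction (a hypothetical Python sketch, not from the text): the data object is visible only through its predefined operations, and its internal representation is hidden.

```python
class Stack:
    """Data abstraction: a data object plus its predefined operations.
    The internal list is an implementation detail, not part of the interface."""

    def __init__(self):
        self._items = []          # hidden representation

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from an empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items
```

From outside, only push, pop, and is_empty are visible; the representation could be changed (say, to a linked list) without affecting any caller, which is what makes the abstraction useful during maintenance.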

Effective Modular Design

Modularity

The real power of partitioning comes if a system is partitioned into modules so that the modules

are solvable and modifiable separately. It will be even better if the modules are also separately

compilable (then, changes in a module will not require recompilation of the whole system). A

system is considered modular if it consists of discrete components so that each component can be

implemented separately, and a change to one component has minimal impact on other

components.

Modularity is clearly a desirable property in a system. Modularity helps in system debugging: isolating a problem to a component is easier if the system is modular. It helps in system repair: changing a part of the system is easy, because it affects few other parts. And it helps in system building: a modular system can be easily built by "putting its modules together."

A software system cannot be made modular by simply chopping it into a set of modules. For

modularity, each module needs to support a well-defined abstraction and have a clear interface

through which it can interact with other modules. Modularity is where abstraction and partitioning


come together. For easily understandable and maintainable systems, modularity is clearly the basic

objective; partitioning and abstraction can be viewed as concepts that help achieve modularity.

Top-Down and Bottom-Up Strategies

A system consists of components, which have components of their own; indeed, a system is a

hierarchy of components. The highest-level component corresponds to the total system. To design

such a hierarchy there are two possible approaches: top-down and bottom-up.

The top-down approach starts from the highest-level component of the hierarchy and proceeds to

lower levels. By contrast, a bottom-up approach starts with the lowest-level component of the

hierarchy and proceeds through progressively higher levels to the top-level component.

A top-down design approach starts by identifying the major components of the system,

decomposing them into their lower-level components and iterating until the desired level of detail

is achieved. Top-down design methods often result in some form of stepwise refinement. Starting

from an abstract design, in each step, the design is refined to a more concrete level, until we reach

a level where no more refinement is needed and the design can be implemented directly. The top-down approach has been promulgated by many researchers and has been found to be extremely

useful for design. Most design methodologies are based on the top-down approach.
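Stepwise refinement can be sketched as follows (a hypothetical Python example; the function names are invented for illustration). The first refinement step states the system as a few abstract operations; each later step refines one of them into concrete code:

```python
# Refinement step 1: the whole system expressed as three abstract operations.
def generate_report(lines):
    records = parse_records(lines)
    total = summarize(records)
    return format_summary(total)

# Refinement step 2: each abstract operation made concrete.
def parse_records(lines):
    return [int(line) for line in lines]

def summarize(records):
    return sum(records)

def format_summary(total):
    return f"total = {total}"
```

At the first step only generate_report exists, with its three calls acting as specifications of work still to be designed; the second step makes each specification concrete.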

A bottom-up design approach starts with designing the most basic or primitive components and

proceeds to higher-level components that use these lower-level components. Bottom-up methods

work with layers of abstraction. Starting from the very bottom, operations that provide a layer of

abstraction are implemented. The operations of this layer are then used to implement more

powerful operations and a still higher layer of abstraction, until the stage is reached where the

operations supported by the layer are those desired by the system.

A top-down approach is suitable only if the specifications of the system are clearly known and the

system development is from scratch. Hence, it is a reasonable approach if a waterfall type of

process model is being used. However, if a system is to be built from an existing system, a bottom-

up approach is more suitable, as it starts from some existing components. So, for example, if an

iterative enhancement type of process were being followed, in later iterations, the bottom-up

approach would be more suitable (in the first iteration, a top-down approach can be used).

Pure top-down or pure bottom-up approaches are often not practical. For a bottom-up approach

to be successful, we must have a good notion of the top to which the design should be heading.


Without a good idea about the operations needed at the higher layers, it is difficult to determine

what operations the current layer should support. Top-down approaches require some idea about

the feasibility of the components specified during design. The components specified during design

should be implementable, which requires some idea about the feasibility of the lower-level parts of

a component. A common way to combine the two approaches is to provide a layer of abstraction for the application domain of interest through libraries of functions that contain the operations of interest to that domain. Then, a top-down approach is used to determine the

modules in the system, assuming that the abstract machine available for implementing the system

provides the operations supported by the abstraction layer. This approach is frequently used for

developing systems. It can even be claimed that it is almost universally used these days, as most

developments now make use of the layer of abstraction supported in a system consisting of the

library functions provided by operating systems, programming languages, and special-purpose

tools.

Module-Level Concepts

A module is a logically separable part of a program. It is a program unit that is discrete and identifiable with respect to compiling and loading. In terms of common programming language constructs, a module can be a macro, a function, a procedure (or subroutine), a process, or a package. In systems using functional abstraction, a module is usually a procedure or function, or a collection of these.

To produce modular designs, some criteria must be used to select modules so that the modules

support well-defined abstractions and are solvable and modifiable separately. In a system using

functional abstraction, coupling and cohesion are two modularization criteria, which are often used

together.

Functional Independence

The concept of functional independence is a direct outgrowth of modularity and the concepts of

abstraction and information hiding.

Functional independence is achieved by developing modules with a "single-minded" function and an

"aversion" to excessive interaction with other modules. Stated another way, we want to design

software so that each module addresses a specific sub-function of requirements and has a simple

interface, when viewed from other parts of the program structure. It is fair to ask why


independence is important. Software with effective modularity, that is, independent modules, is

easier to develop because functions may be compartmentalized and interfaces are simplified

(consider the ramifications when development is conducted by a team). Independent modules are

easier to maintain (and test) because secondary effects caused by design or code modification are

limited, error propagation is reduced, and reusable modules are possible. To summarize, functional

independence is a key to good design, and design is the key to software quality.

Coupling

Coupling is a measure of the degree of interdependence between modules. A good design should

have low coupling.

Two modules are considered independent if one can function completely without the presence of

the other. Obviously, if two modules are independent, they are solvable and modifiable separately.

However, all the modules in a system cannot be independent of each other, as they must interact so

that together, they produce the desired external behavior of the system. The more connections

between modules, the more dependent they are in the sense that more knowledge about one

module is required to understand or solve the other module. Hence, the fewer and simpler the

connections between modules, the easier it is to understand one without understanding the other.

The notion of coupling attempts to capture this concept of "how strongly" different modules are

interconnected.

Uncoupled modules: no dependency.

Loosely coupled modules: low dependency.

Tightly coupled modules: high dependency.

Coupling between modules is the strength of interconnections between modules or a measure of

interdependence among modules. In general, the more we must know about module A, in order to

understand module B, the more closely connected A is to B. "Highly coupled" modules are joined


by strong interconnections, while "loosely coupled" modules have weak interconnections.

Independent modules have no interconnections. To solve and modify a module separately, we

would like the module to be loosely coupled with other modules. The choice of modules decides

the coupling between modules. Because the modules of the software system are created during

system design, the coupling between modules is largely decided during system design and cannot

be reduced during implementation.

Coupling is an abstract concept and is not easily quantifiable. So, no formulas can be given to

determine the coupling between two modules. However, some major factors can be identified as

influencing coupling between modules. Among them, the most important are the type of

connection between modules, the complexity of the interface, and the type of information flow

between modules.

Coupling increases with the complexity and obscurity of the interface between modules. To keep

coupling low, we would like to minimize the number of interfaces per module and the complexity

of each interface. An interface of a module is used to pass information to and from other modules.

Coupling is reduced if only the defined entry interface of a module is used by other modules (for

example, passing information to and from a module, exclusively through parameters). Coupling

would increase if a module were used by other modules via an indirect and obscure interface, like

directly using the internals of a module or using shared variables.

The type of information flow along the interfaces is the third major factor affecting coupling.

There are two kinds of information that can flow along an interface: data or control. Passing or

receiving control information means that the action of the module will depend on this control

information, which makes it more difficult to understand the module and provide its abstraction.

Transfer of data information means that a module passes some data as input to another module and

gets in return some data as output. This allows a module to be treated as a simple input-output

function that performs some transformation on the input data to produce the output data. In

general, interfaces with only data communication result in the lowest degree of coupling, followed

by interfaces that only transfer control data. Coupling is considered highest if the data is hybrid,

that is, some data items and some control items are passed between modules.
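The data-versus-control distinction can be illustrated like this (a hedged Python sketch; the function names are invented):

```python
# Control coupling: the caller passes a flag that steers the callee's
# internal logic, so the callee is no longer a simple input-output function.
def compute(data, operation):
    if operation == "sum":
        return sum(data)
    if operation == "max":
        return max(data)
    raise ValueError("unknown operation")

# Data coupling only: each module is a plain input-output transformation
# whose abstraction is easy to state and understand.
def total(data):
    return sum(data)

def largest(data):
    return max(data)
```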


Cohesion

The internal cohesion of a module is measured in terms of the strength of binding elements within

the module. Cohesion of elements occurs on the scale of weakest (least desirable) to strongest

(most desirable).

No cohesion: no dependency between module elements.

Low cohesion: low dependency between module elements.

High cohesion: high dependency between module elements.

We have seen that coupling is reduced when the relationships among elements in different modules are minimized, that is, when elements in different modules have little or no bond between them. Another way of achieving this effect is to strengthen the bond between elements of the same module by maximizing the relationships among them. Cohesion is the concept that tries to capture this intra-module strength of association. With cohesion, we are interested in determining how closely the elements of a module are related to each other.

Cohesion of a module represents how tightly the internal elements of the module are bound to one

another. Cohesion of a module gives the designer an idea about whether the different elements of a

module belong together in the same module. Cohesion and coupling are clearly related. Usually,

the greater the cohesion of each module in the system, the lower the coupling between modules is.

This correlation is not perfect, but it has been observed in practice. There are several levels of

cohesion:


Coincidental (lowest, worst)

Logical

Temporal

Procedural

Communicational

Sequential

Functional (highest, best)

Coincidental is the lowest level, and functional is the highest. These levels do not form a linear

scale. Functional binding is much stronger than the rest, while the first two are considered much

weaker than others. Often, many levels can be applicable when considering cohesion between two

elements of a module. In such situations, the highest level is considered. Cohesion of a module is

considered the highest level of cohesion applicable to all elements in the module.

Coincidental cohesion occurs when there is no meaningful relationship among the elements of a

module. Coincidental cohesion can occur if an existing program is "modularized" by chopping it into pieces and making each piece a module. If a module is created to save duplicate code by

combining some part of code that occurs at many different places, that module is likely to have

coincidental cohesion. In this situation, the statements in the module have no relationship with

each other, and if one of the modules using the code needs to be modified and this modification

includes the common code, it is likely that other modules using the code do not want the code

modified. Consequently, the modification of this "common module" may cause other modules to

behave incorrectly.

A module has logical cohesion if there is some logical relationship between the elements of a

module, and the elements perform functions that fall in the same logical class. A typical example

of this kind of cohesion is a module that performs all the inputs or all the outputs. In such a

situation, if we want to input or output a particular record, we have to somehow convey this to the module. Often, this is done by passing some kind of special status flag, which is used to determine which statements to execute in the module. Besides resulting in hybrid information flow between modules, which is generally the worst form of coupling between modules, such a module


will usually have tricky and clumsy code. In general, logically cohesive modules should be

avoided, if possible.
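The status-flag pattern described above looks like this in practice (an invented Python sketch), alongside the functionally cohesive alternative:

```python
# Logically cohesive: one module performs "all the outputs," and a status
# flag selects which one; callers and callee are coupled through the flag.
def write_record(record, kind):
    if kind == "header":
        return f"== {record} =="
    if kind == "detail":
        return f"  {record}"
    if kind == "footer":
        return f"-- {record} --"
    raise ValueError("unknown record kind")

# Preferred: separate, functionally cohesive modules; no flag is needed,
# and each function can be understood and modified on its own.
def write_header(record):
    return f"== {record} =="

def write_detail(record):
    return f"  {record}"
```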

Temporal cohesion is the same as logical cohesion, except that the elements are also related in

time and are executed together. Modules that perform activities like "initialization," "clean-up,"

and "termination" are usually temporally bound. Even though the elements in a temporally bound

module are logically related, temporal cohesion is higher than logical cohesion, because the

elements are all executed together. This avoids the problem of passing the flag, and the code is

usually simpler.
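A small sketch of a temporally bound module, with illustrative contents: the elements below are related only because they all execute together at start-up, yet, unlike a logically cohesive module, no selection flag is needed.

```python
# Hypothetical "initialization" module: temporally cohesive, because its
# elements are related only by the fact that they all run at start-up.
def initialize():
    counters = {"words": 0, "lines": 0}   # reset counters
    buffer = []                            # clear the input buffer
    log = ["system started"]               # prime the log
    return counters, buffer, log
```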

A procedurally cohesive module contains elements that belong to a common procedural unit. For

example, a loop or a sequence of decision statements in a module may be combined to form a

separate module. Procedurally cohesive modules often occur when the modular structure is

determined from some form of flowchart. Procedural cohesion often cuts across functional lines. A

module with only procedural cohesion may contain only part of a complete function or parts of

several functions.

When the elements are together in a module, because the output of one forms the input to another,

we get sequential cohesion. If we have a sequence of elements in which the output of one forms

the input to another, sequential cohesion does not provide any guidelines on how to combine them

into modules. Different possibilities exist: combine all in one module, put the first half in one and

the second half in another, the first third in one and the rest in the other, and so forth.

Consequently, a sequentially bound module may contain several functions or parts of different

functions. Sequentially cohesive modules bear a close resemblance to the problem structure.

However, they are considered to be far from the ideal, which is functional cohesion.

Functional cohesion is the strongest cohesion. In a functionally bound module, all the elements of

the module are related to performing a single function. By function, we do not mean simply

mathematical functions; modules accomplishing a single goal are also included. Functions like

"compute square root" and "sort the array" are clear examples of functionally cohesive modules.
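The two examples mentioned can be sketched directly; each module below performs exactly one function and can be described by a simple sentence.

```python
import math

# Functionally cohesive modules: each is fully described by a simple
# sentence with a single verb and a single specific object.
def compute_square_root(x):
    """Compute the square root of x."""
    if x < 0:
        raise ValueError("x must be non-negative")
    return math.sqrt(x)

def sort_the_array(a):
    """Sort the array (returns a new sorted list)."""
    return sorted(a)
```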

How does one determine the cohesion level of a module? There is no mathematical formula that

can be used. We have to use our judgment for this. A useful technique for determining if a module

has functional cohesion is to write a sentence that describes, fully and accurately, the function or

purpose of the module. The following tests can then be made:

1. If the sentence must be a compound sentence, if it contains a comma, or it has more than one

verb, the module is probably performing more than one function, and it probably has

sequential or communicational cohesion.

2. If the sentence contains words relating to time, like "first," "next," "when," and "after", the

module, probably, has sequential or temporal cohesion.

3. If the predicate of the sentence does not contain a single specific object following the verb

(such as "edit all data"), the module probably has logical cohesion.

4. Words like "initialize" and "cleanup" imply temporal cohesion.

Modules with functional cohesion can always be described by a simple sentence. However, if a

description is a compound sentence, it does not mean that the module does not have functional

cohesion. Functionally cohesive modules can also be described by compound sentences. If we

cannot describe it using a simple sentence, the module is not likely to have functional cohesion.

Design Methodologies

Structured Design Methodology

Object-Oriented Methodology

Creating the software system design is the major concern of the design phase. Many design

techniques have been proposed over the years to provide some discipline in handling the

complexity of designing large systems. The aim of design methodologies is not to reduce the

process of design to a sequence of mechanical steps but to provide guidelines to aid the designer

during the design process. Here, we describe the structured design methodology for developing

system designs.

Structured design methodology (SDM) views every software system as having some inputs that

are converted into the desired outputs by the software system. The software is viewed as a

transformation function that transforms the given inputs into the desired outputs, and the central

problem of designing software systems is considered to be properly designing this transformation

function. Due to this view of software, the structured design methodology is primarily

function-oriented and relies heavily on functional abstraction and functional decomposition.

The concept of the structure of a program lies at the heart of the structured design method. During

design, structured design methodology aims to control and influence the structure of the final

program. The aim is to design a system so that programs implementing the design would have a

nice hierarchical structure, with functionally cohesive modules and as few interconnections

between modules as possible.

The overall strategy is to identify the input and output streams and the primary transformations

that have to be performed to produce the output. High-level modules are, then, created to perform

these major activities, which are later refined. There are four major steps in this strategy:

1. Restate the problem as a data flow diagram.

2. Identify the most abstract input and output data elements.

3. First-level factoring.

4. Factoring of input, output, and transform branches.

We will now discuss each of these steps in more detail. For illustrating each step of the

methodology, as we discuss them, we consider the following problem: there is a text file

containing words separated by blanks or new lines. We have to design a software system to

determine the number of unique words in the file.

Restate the Problem as a Data Flow Diagram

To use the SD methodology, the first step is to construct the data flow diagram for the problem.

However, there is a fundamental difference between the DFDs drawn during requirements analysis

and during structured design. In the requirements analysis, a DFD is drawn to model the problem

domain. The analyst has little control over the problem, and hence, his task is to extract from the

problem all the information and then represent it as a DFD.

During design activity, we are no longer modeling the problem domain, but rather are dealing with

the solution domain and developing a model for the eventual system. That is, the DFD during

design represents how the data will flow in the system when it is built. In this modeling, the major

transforms or functions in the software are decided, and the DFD shows the major transforms that

the software will have and how the data will flow through different transforms. So, drawing a

DFD for design is a very creative activity in which the designer visualizes the eventual system and

its processes and data flows. As the system does not yet exist, the designer has complete freedom

in creating a DFD that will solve the problem stated in the SRS. The general rules of drawing a

DFD remain the same; we show what transforms are needed in the software and are not concerned

with the logic for implementing them. Consider the example of the simple automated teller

machine that allows customers to withdraw money.

An example, consider the problem of determining the number of different words in an input file.

The data flow diagram for this problem is shown in Figure 5.6.

This problem has only one input data stream, the input file, while the desired output is the count of

different words in the file. To transform the input to the desired output, the first thing we do is

form a list of all the words in the file. It is best to then sort the list, as this will make identifying

different words easier. This sorted list is, then, used to count the number of different words, and

the output of this transform is the desired count, which is then printed. This sequence of data

transformation is what we have in the data flow diagram.
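This sequence of transformations can be sketched in Python (an illustration of the DFD, with hypothetical function names; the "input file" is represented here as a string of words separated by blanks or newlines):

```python
# Each function corresponds to one transform in the data flow diagram.
def get_word_list(text):
    return text.split()                # input file -> word list

def sort_word_list(words):
    return sorted(words)               # word list -> sorted word list

def count_different_words(sorted_words):
    # On a sorted list, a word is "new" when it differs from its predecessor.
    count = 0
    for i, w in enumerate(sorted_words):
        if i == 0 or w != sorted_words[i - 1]:
            count += 1
    return count                       # sorted list -> count

def count_unique(text):
    return count_different_words(sort_word_list(get_word_list(text)))
```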

FIGURE 5.6: DFD FOR THE WORD-COUNTING PROBLEM

Identify the Most Abstract Input and Output Data Elements

Most systems have some basic transformations that perform the required operations. However, in

most cases, the transformation cannot be easily applied to the actual physical input and produce

the desired physical output. Instead, the input is first converted into a form on which the

transformation can be applied with ease. Similarly, the main transformation modules, often,

produce outputs that have to be converted into the desired physical output. The goal of this second

step is to separate the transforms in the data flow diagram that convert the input or output to the

desired format from the ones that perform the actual transformations.

Consider the data flow diagram shown in Figure 5.6. The arcs in the data flow diagram are the

most abstract input and most abstract output. The choice of the most abstract input is obvious. We

start following the input. First, the input file is converted into a word list, which is essentially the

input in a different form. The sorted word list is still basically the input, as it is still the same list,

in a different order. This appears to be the most abstract input because the next data (i.e., count) is

not just another form of the input data. The choice of the most abstract output is even more

obvious; count is the natural choice (a data that is a form of input will not usually be a candidate

for the most abstract output). Thus we have one central transform,

count-the-number-of-different-words, which has one input and one output data item.

First-Level Factoring

Having identified the central transforms and the most abstract input and output data items, we are

ready to identify some modules for the system. We first specify a main module, whose purpose is

to invoke the subordinates. The main module is, therefore, a coordinate module. For each of the

most abstract input data items, an immediate subordinate module to the main module is specified.

Each of these modules is an input module, whose purpose is to deliver to the main module the

most abstract data item for which it is created.

Similarly, for each most abstract output data item, a subordinate output module that accepts data

from the main module is specified. Each of the arrows connecting these input and

output subordinate modules is labeled with the respective abstract data item flowing in the proper

direction.

FIGURE 5.7: FIRST LEVEL FACTORING

Note that here a module is created for a transform, while input/output modules are created for data

items. The structure after the first-level factoring of the word-counting problem (its data flow

diagram was given earlier) is shown in Figure 5.7.
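The first-level structure can be sketched as code, with the main module acting purely as a coordinator (names are illustrative; a list stands in for the physical output device):

```python
# Input module: delivers the most abstract input (the sorted word list).
def get_sorted_word_list(text):
    return sorted(text.split())

# Central transform: most abstract input -> most abstract output.
def count_number_of_different_words(sorted_words):
    return len(set(sorted_words))

# Output module: accepts the most abstract output from main and emits it.
def output_count(count, sink):
    sink.append(f"{count} different words")

# Main coordinate module: only invokes subordinates and routes the data.
def main(text, sink):
    sorted_words = get_sorted_word_list(text)
    count = count_number_of_different_words(sorted_words)
    output_count(count, sink)
```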

Factoring the Input, Output, and Transform Branches

The first-level factoring results in a very high-level structure, where each subordinate module has

a lot of processing to do. To simplify these modules, they must be factored into subordinate

modules that will distribute the work of a module. Each of the input, output, and transformation

modules must be considered for factoring. Let us start with the input modules.

The purpose of an input module, as viewed by the main program, is to produce some data. To

factor an input module, the transform in the data flow diagram that produced the data item is now

treated as a central transform. The process performed for the first-level factoring is repeated here

with this new central transform, with the input module being considered the main module. A

subordinate input module is created for each input data stream coming into this new central

transform, and a subordinate transform module is created for the new central transform. The new

input modules now created can then be factored again, until the physical inputs are reached.

Factoring of input modules will usually not yield any output subordinate modules.

FIGURE 5.9: FACTORING THE INPUT MODULE

The factoring of the input module get-sorted-list in the first-level structure is shown in Figure 5.9.

The transform producing the input returned by this module (i.e., the sort transform) is treated as a

central transform. Its input is the word list. Thus, in the first factoring, we have an input module to

get the list and a transform module to sort the list. The input module can be factored further, as the

module needs to perform two functions, getting a word and then adding it to the list. Note that the

looping arrow is used to show the iteration.
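The factored input branch of Figure 5.9 can be sketched as follows, with a word stream standing in for the physical input file (the function names are illustrative):

```python
# Lowest-level input module: deliver one word, or None at end of input.
def get_a_word(stream):
    return next(stream, None)

def add_to_list(word, word_list):
    word_list.append(word)

# Factored input module: the looping arrow in the structure chart becomes
# iteration over get-a-word and add-to-list until the input is exhausted.
def get_word_list(stream):
    word_list = []
    word = get_a_word(stream)
    while word is not None:
        add_to_list(word, word_list)
        word = get_a_word(stream)
    return word_list

# The sort transform, treated as the new central transform in this factoring.
def get_sorted_list(stream):
    return sorted(get_word_list(stream))
```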

Object-Oriented Design

When using an object-oriented approach, an object is the basic design unit. That is, the goal of

design is to partition the system into objects rather than partition it into functions. For objects,

during design, the class for the objects is identified. A class represents the type for the object and

defines the possible state space for the objects of that class and the operations that can be

performed on the objects. An object is an instance of a class and has state, behavior, and identity.

Any object-oriented design has the following major properties.

Encapsulation

Encapsulation is the mechanism that binds together code and the data it manipulates, and keeps

both safe from outside interference and misuse. It is a protective wrapper that prevents the code

and data from being arbitrarily accessed by other code defined outside the wrapper.

Encapsulation is also the capability to represent, denote, and handle information at a higher level

than that inherent to a computer or base language. Variables and methods are formally known as

instance variables and instance methods, to distinguish them from class variables and class methods.
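As a minimal sketch (the account example is hypothetical, not from the text), encapsulation in Python might look like this: the balance is kept inside the wrapper and is reachable only through the defined interface.

```python
class Account:
    def __init__(self, opening_balance):
        self._balance = opening_balance    # internal state, hidden by convention

    def deposit(self, amount):
        # The wrapper enforces a rule outside code cannot bypass.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance               # read access only via the interface
```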

Inheritance

Inheritance is the process by which one object acquires the properties of another object.

Classes inherit state and behavior from their superclass. A class is a blueprint or prototype that

defines the variables and methods common to all objects of a certain kind.

Object oriented systems allow classes to be defined in terms of other classes. For example,

mountain bikes, racing bikes and tandems are all subclasses of the bicycle class. Similarly, the

bicycle class is the superclass of mountain bikes, racing bikes and tandems.

Each subclass inherits state (in the form of variable declarations) from the superclass. Mountain

bikes, racing bikes, and tandems share some state: cadence, speed, and the like. Each subclass

also inherits methods from the superclass.
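The bicycle example can be sketched directly (the pedal behavior is an illustrative assumption):

```python
# Superclass: defines state and behavior common to all bicycles.
class Bicycle:
    def __init__(self):
        self.cadence = 0
        self.speed = 0

    def pedal(self, cadence):
        self.cadence = cadence
        self.speed = cadence // 2      # illustrative cadence-speed relation

# Subclasses inherit state (cadence, speed) and behavior (pedal).
class MountainBike(Bicycle):
    def __init__(self):
        super().__init__()
        self.suspension = "front"      # state added by the subclass

class RacingBike(Bicycle):
    pass                               # inherits everything unchanged
```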

Abstraction

Abstraction is the process of categorising data. Consider a person looking for a frame in an

optician's shop. To be able to choose a frame from amongst the various types of frames available,

he has to first identify the attributes he is looking for. Once he has identified the attributes, he has

with him a category or class of frames. Similarly, to model any real-life object in OOP, an

"object" has to be instantiated from a specific "class". This basic process of forming a class is

known as "abstraction".

Objects in a system do not exist in isolation but are related to each other. One of the goals of

design is to identify the relationship between the objects. This relationship is generally either an

association or an aggregation.

An Association is a general relationship between objects of two classes, implying that objects of a

class invoke operations on the objects of the other class.

Aggregation captures the “whole-part of” relationship, implying that the part objects belong to the

container object.
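A brief sketch of the two relationships, using hypothetical classes: the wheels are parts of the bicycle (aggregation, whole-part), while the rider is merely associated with the bicycle and invokes operations on it.

```python
class Wheel:
    def __init__(self, diameter):
        self.diameter = diameter

class Bicycle:
    def __init__(self):
        # Aggregation: the part objects belong to the container object.
        self.wheels = [Wheel(26), Wheel(26)]
        self.speed = 0

    def accelerate(self, amount):
        self.speed += amount

class Rider:
    # Association: objects of one class invoke operations on another's.
    def ride(self, bicycle):
        bicycle.accelerate(10)
        return bicycle.speed
```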

A key property of object-oriented systems is inheritance that allows the definition of a class in

terms of some existing classes. The new class inherits features from some existing classes, thereby

promoting reuse. The class that inherits is called the subclass or the derived class, while the parent

class is called the superclass or the base class. With inheritance, all the features of a base class can

be made accessible to the derived class. The derived class can add features or redefine them,

thereby creating a specialized class. The inheritance relationship results in the creation of class

hierarchies, the use of which is fundamental to object-oriented approaches. Generally, single

inheritance, where a derived class has exactly one base class, suffices. Multiple inheritance is the

situation where a class may derive features from various base classes. This form of inheritance

brings in some new issues.

Hence, an object-oriented design is one where the final system is represented in terms of object

classes, relationships between objects, and relationships between classes. To ensure that the design

is modular, some general properties should be satisfied. The main desirable properties are

information hiding, low coupling, and high cohesion, where coupling and cohesion are defined in

terms of classes. Reuse through inheritance is another important issue, although how to use it is

not straightforward, as reuse frequently conflicts with the desire for low coupling.

Although the basic concepts and principles to use for design and the basic means to evaluate the

quality of a design are sufficient to ensure that proper designs are obtained, some help is also

needed in how to create a design. Design methodologies help here by providing some guidelines

of how to create a design.

The object modeling technique (OMT) for design focuses on modeling. From this point of view,

the design methodology for producing an object-oriented design consists of the following

sequence of steps:

Produce the object model.

Produce the dynamic model and use it to define operations on classes.

Produce the functional model and use it to define operations on classes.

Define algorithms for implementing major operations and identify internal classes and

operations from them.

Optimize the package.

However, the boundary between analysis and design is generally not clear in object-oriented

approaches. We consider that modeling is the starting step for design and have thus represented

OMT as a design technique.

Information Hiding

Information hiding is a fundamental design concept for software. The principle of information

hiding was formulated by Parnas. When a software system is designed using the information

hiding approach, each module in the system hides the internal details of its processing activities,

and modules communicate only through well-defined interfaces.

According to Parnas, design should begin with a list of difficult design decisions and design

decisions that are likely to change. Each module is designed to hide one such decision from the

other modules. Because design decisions transcend execution time, design modules may not

correspond to processing steps in the implementation of the system.

In addition to hiding of difficult and changeable design decisions, other candidates for information

hiding include:

1. A data structure, its internal linkage, and the implementation details of the procedures that

manipulate it (this is the principle of data abstraction).

2. The format of control blocks such as those for queues in an operating system (a

"control-block" module).

3. Character codes, ordering of character sets, and other implementation details.

4. Shifting, masking, and other machine-dependent details.

Information hiding can be used as the principal design technique for architectural design of a

system, or as a modularization criterion in conjunction with other design techniques.
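As a sketch of item 1, the principle of data abstraction: the queue below hides its internal representation, so the storage decision can later change (say, to a linked control block) without touching any caller. The example is illustrative, not from the text.

```python
class Queue:
    def __init__(self):
        # Hidden design decision: list storage. Callers never see this.
        self._items = []

    def enqueue(self, item):
        self._items.append(item)

    def dequeue(self):
        if not self._items:
            raise IndexError("dequeue from empty queue")
        return self._items.pop(0)

    def is_empty(self):
        return not self._items
```

Because modules communicate only through enqueue, dequeue, and is_empty, replacing the internal list is a change confined to this one module.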

Design Verification

The output of the design phase should be verified before proceeding with the activities of the

next phase. The most common approach for verification is design review. The goal of design

review is to ensure that the design satisfies the requirements and is of "good quality". If errors

are made during the design process, they will ultimately be reflected and amplified in the

subsequent phases. A verification activity is

to assess and improve the quality of the work products generated during development and

modification of software. Quality attributes of interest include correctness, completeness,

consistency, reliability, usefulness, usability, efficiency, conformance to standards, and overall

cost effectiveness.

It has been found that the types of errors detected by various categories of verification techniques

are different.

As the cost of removing faults caused by errors that occur during design increases with the delay

in detecting the errors, it is best if design errors are detected early, before they manifest themselves

in the system. Detecting errors in design is the aim of design reviews.

In the system design review process, a group of people get together to discuss the design with the aim

of revealing design errors. The review group must include a member of both the system design

team and the detailed design team, the author of the requirements document, the author

responsible for maintaining the design document, and an independent software quality engineer.

Each member studies the design before the meeting, and with the aid of a checklist, marks items

that the reviewer feels are incorrect. The member asks questions and the chief designer tries to

explain the situation. Discussion ensues, and during its course design errors are revealed.

As with any review, it should be kept in mind that the aim of the meeting is to uncover design

errors, not to try to fix them. Also, the psychological frame of mind should be healthy, and the designer

should not be put in a defensive position. The meeting ends with a list of action items, which are

later acted on by the design team.

The number of ways in which errors can come in a design is limited only by the creativity of the

designer. The most significant design error is omission or misinterpretation of specified

requirements. There are some other quality factors that are not strictly design errors but that have

implications on the reliability and maintainability of the system. An example of this is weak

modularity (that is, weak cohesion and/or strong coupling). During reviews, elements of the design

that do not conform to design standards should also be considered "errors".

A Sample Checklist: The use of checklists can be extremely useful for any review. The checklist

can be used by each member during private study of the design and during the review meeting.

Here we list a few general items that can be used to construct a checklist for a design review:

Is each of the functional requirements taken into account?

Are there analyses to demonstrate that performance requirements can be met?

Are all assumptions explicitly stated, and are they acceptable?

Are there any limitations or constraints on the design beyond those in the requirements?

Are external specifications of each module completely specified?

Have exceptional conditions been handled?

Are all the data formats consistent with the requirements?

Are the operator and user interfaces properly addressed?

Is the design modular, and does it conform to local standards?

Are the sizes of data structures estimated? Are provisions made to guard against overflow?

Monitoring and Control

The review process was started with the purpose of detecting defects in the code. Though design

reviews substantially reduce defects in code, code reviews are still very useful and can considerably

enhance reliability and reduce effort during testing. Code reviews are designed to detect defects

that originate during the coding process, although they can also detect defects in detailed design.

However, it is unlikely that code reviews will reveal errors in system design or requirements.

Code inspections or reviews are usually held after the code has been successfully compiled and

other forms of static analysis tools have been applied, but before any testing has been performed. Therefore,

activities like code reading, symbolic execution, and static analysis should be performed, and

defects found by these techniques corrected before code reviews are held. The main motivation for

this is to save human time and effort, which would otherwise be spent detecting errors that a

compiler or static analyzer can detect. In other words, the entry criteria for code review are that the

code must compile successfully and be "passed" by other static analysis tools.

The documentation to be distributed to the review team members includes the code to be reviewed

and the design document. The review team for code reviews should include the programmer, the

designer, and the tester. The review starts with the preparation for the review and ends with a list

of action items.

The aim of reviews is to detect defects in code. One obvious coding defect is that the code fails to

implement the design. This can occur in many ways. The function implemented by a module may

be different from the function actually defined in the design or the interface of the modules may

not be the same as the interface specified in the design. In addition, the input-output format

assumed by a module may be inconsistent with the format specified in the design.

Other code defects can be divided into two broad categories: logic and control and data operations

and computations. Some examples of logic and control defects are infinite loops, unreachable

code, incorrect predicate, missing or unreferenced labels, and improper nesting of loops and

branches. Examples of defects in computation and data operations are missing validity tests for

external data, incorrect access of array components, improper initialization, and misuse of

variables.
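A few of these defect categories can be illustrated side by side with their corrected forms (the functions are hypothetical stand-ins):

```python
# "Off by one" in the number of loop executions:
def sum_first_n_buggy(a, n):
    total = 0
    for i in range(n - 1):     # defect: executes one time too few
        total += a[i]
    return total

def sum_first_n(a, n):
    total = 0
    for i in range(n):         # correct loop bound
        total += a[i]
    return total

# Missing validity test for external data (divisor not tested for zero):
def average_buggy(values):
    return sum(values) / len(values)   # fails on an empty list

def average(values):
    if not values:                     # divisor tested before use
        return 0.0
    return sum(values) / len(values)
```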

A Sample Checklist: The following are some of the items that can be included in a checklist for

code reviews.

Do data definitions exploit the typing capabilities of the language?

Do all the pointers point to some object? (Are there any "dangling pointers"?)

Are the pointers set to NULL, where needed?

Are the array indexes within bounds?

Are indexes properly initialized?

Are all the branch conditions correct (not too weak, not too strong)?

Will a loop always terminate (no infinite loops)?

Is the loop termination condition correct?

Is the number of loop executions "off by one"?

Where applicable, are the divisors tested for zero?

Coding

The main activity of the coding phase is to translate design into code. The coding phase affects

both testing and maintenance profoundly. The time spent in coding is a small percentage of the

total software cost, while testing and maintenance consume the major percentage. Thus, it should

be clear that the goal during coding should not be to reduce the implementation cost but the goal

should be to reduce the cost of later phases, even if it means that the cost of this phase has to

increase. In other words, the goal during this phase is not to simplify the job of the programmer.

Rather, the goal should be to simplify the job of the tester and the maintainer.

This distinction is important, as most programmers are individualistic, and mostly concerned about

how to finish their job quickly, without keeping the later phases in mind. During implementation,

it should be kept in mind that the programs should not be constructed so that they are easy to

write, but so that they are easy to read and understand. A program is read a lot more often, and by

a lot more people, during the later phases. Often, making a program more readable will require

extra work by the programmers. For example, sometimes there are "quick fixes" to modify a given

code easily, which result in a code that is more difficult to understand. In such cases, in the interest

of simplifying the later phases, the easy "quick fixes" should not be adopted.

There are many different criteria for judging a program, including readability, size of the program,

execution time, and required memory. Having readability and understandability as a clear

objective of the coding activity can itself help in producing software that is more maintainable.

Programming Practice

The primary goal of the coding phase is to translate the given design into source code, in a given

programming language, so that code is simple, easy to test, and easy to understand and modify.

Simplicity and clarity are the properties that a programmer should strive for.

Good programming is a skill that can only be acquired by practice. However, much can be learned

from the experience of others, and some general rules and guidelines can be laid down for the

programmer. Good programming (producing correct and simple programs) is a practice

independent of the target programming language, although some well-structured languages like

Pascal, Ada, and Modula make the programmer's job simpler. In this section, we will discuss some

concepts related to coding in a language-independent manner.

Top-Down and Bottom-Up

All designs contain hierarchies, as creating a hierarchy is a natural way to manage complexity.

Most design methodologies for software also produce hierarchies. The hierarchy may be of

functional modules, as is the case with the structured design methodology where the hierarchy of

modules is represented by the structure chart. Or, the hierarchy may be an object hierarchy as is

produced by object-oriented design methods and, frequently, represented by object diagrams. The

question at coding time is: given the hierarchy of modules produced by design, in what order

should the modules be built: starting from the top level or from the bottom level?

In a top-down implementation, the implementation starts from the top of the hierarchy and

proceeds to the lower levels. First, the main module is implemented, then its subordinates are

implemented, and their subordinates, and so on. In a bottom-up implementation, the process is the

reverse. The development starts with implementing the modules at the bottom of the hierarchy and

proceeds through the higher levels until it reaches the top.

Top-down and bottom-up implementation should not be confused with top-down and bottom-up

design. Here, the design is being implemented, and if the design is fairly detailed and complete, its

implementation can proceed in either the top-down or the bottom-up manner, even if the design

was produced in a top-down manner. Which of the two is used mostly affects testing.

If there is a complete design, why is the order in which the modules are built an issue? The main

reason is that we want to incrementally build the system. That is, we want to build the system in

parts, even though the design of the entire system has been done. This is necessitated by the fact

that for large systems it is simply not feasible or desirable to build the whole system and then test

it. All large systems must be built by assembling validated pieces together. The case with software

systems is the same. Parts of the system have to be first built and tested, before putting them

together to form the system. Because parts have to be built and tested separately, the issue of top-

down versus bottom-up arises.

The real issue in which order the modules are coded comes in testing. If all the modules are to be

developed and then put together to form a system for testing purposes, as is done for small

systems, it is immaterial which module is coded first. However, when modules have to be tested

separately, top-down and bottom-up lead to top-down and bottom-up approaches to testing. And

these two approaches have different consequences. Essentially, when we proceed top-down, for

testing a set of modules at the top of the hierarchy, stubs will have to be written for the lower-

level modules that the set of modules under test invokes. On the other hand, when we proceed

bottom-up, the modules lower in the hierarchy have already been developed, and driver

modules are needed to invoke the modules under test.
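The stub and driver roles described above can be sketched as follows. This is a hypothetical two-level hierarchy (the module names `report` and `fetch_data` are invented for illustration): a stub stands in for an unwritten lower-level module during top-down testing, while a driver exercises a finished lower-level module during bottom-up testing.

```python
# --- Top-down: the top module is real; the lower one is a stub. ---
def fetch_data_stub():
    """Stub standing in for the unwritten lower-level module:
    it simply returns fixed, representative data."""
    return [3, 1, 2]

def report(fetch=fetch_data_stub):
    """Top-level module under test; calls whatever lower module it is given."""
    data = fetch()
    return "items=%d max=%d" % (len(data), max(data))

# --- Bottom-up: the lower module is real; a driver invokes it. ---
def fetch_data():
    """The finished lower-level module."""
    return list(range(1, 6))

def driver():
    """Driver that invokes the module under test and checks its output."""
    data = fetch_data()
    assert data == [1, 2, 3, 4, 5]
    return data
```

Once the real `fetch_data` exists, the stub is discarded and `report` is re-tested against it; once a real caller of `fetch_data` exists, the driver is discarded. Stubs and drivers are scaffolding, not part of the delivered system.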

Top-down versus bottom-up is also a pertinent issue when the design is not detailed enough. In

such cases, some of the design decisions have to be made during development. This may be true,

for example, when building a prototype. In such cases, top-down development may be preferable

to aid the design while the implementation is progressing. On the other hand, many complex

systems, like operating systems or networking software systems, are naturally organized as layers.

In a layered architecture, each layer provides services to the layers above it, which use these

services to implement the services they in turn provide. For a layered architecture, it is generally best for

the implementation to proceed in a bottom-up manner.
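A minimal sketch of such a layered organization follows (the layer names and classes are invented for this example, not a real system). Because each layer uses only the services of the layer directly below it, a bottom-up implementation lets every layer be built and tested on an already-working foundation:

```python
class StorageLayer:
    """Bottom layer: built and tested first in a bottom-up implementation."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class CacheLayer:
    """Middle layer: implements its read service using StorageLayer's services."""
    def __init__(self, storage):
        self._storage = storage
        self._cache = {}
    def read(self, key):
        if key not in self._cache:
            self._cache[key] = self._storage.get(key)
        return self._cache[key]

class AppLayer:
    """Top layer: uses only the services of the layer directly below it."""
    def __init__(self, cache):
        self._cache = cache
    def greeting(self, user_key):
        return "hello, %s" % self._cache.read(user_key)
```

Building bottom-up, `StorageLayer` is validated first, then `CacheLayer` is tested against the real storage rather than a stub, and finally `AppLayer` against the real cache, mirroring the recommendation above.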

Summary

The design process is a sequence of steps that enables the designer to describe all aspects of

the software to be built.

Design principles aim at finding the best possible design within the limitations imposed by the

requirements and the physical and social environment in which the system will operate.

A system is considered modular if it consists of discrete components so that each component

can be implemented separately, and a change to one component has minimal impact on other

components.

For easily understandable and maintainable systems, modularity is clearly the basic objective;

partitioning and abstraction can be viewed as concepts that help achieve modularity.

Verification is primarily intended for detecting errors introduced during the coding phase. That is,

the goal of verification of the code produced is to show that the code is consistent with the

design it is supposed to implement.

Monitoring and control are essential post-design activities designed to detect defects that

originate during the coding process, although they can also detect defects in detailed design.

Detailed Questions

1. Describe design principles.

2. Describe modularity. Give the critical views with respect to coupling and cohesion.

3. Describe all design tools and techniques.

4. Discuss the advantages of structured programming.

5. Describe PDL with respect to matrix multiplication.

6. Differentiate between coupling and cohesion. Which one is more important during design

and why? Justify your answer with an example.

7. List and describe various types of cohesion with one example of each.

8. Differentiate between function-oriented design and object-oriented design. Which one is

more useful for handling a large complex problem and why? Justify through an example.