The Rational Edge -- April 2001

Copyright Rational Software 2001

Editor's Notes

T.S. Eliot had specific reasons for believing that "April is the cruelest month," but if your career is in software development, you're likely to be so busy that one month feels as demanding as the next. Back in February, Rational co-founders Paul Levy and Mike Devlin made it clear that enormous challenges (as well as opportunities) face software development organizations from now into the foreseeable future. In the conclusion of their cover story, "Software and the New Business Economy," they presented the main challenge as a dilemma: Do you deliver high-quality software, or do you get your product to market faster than your competition?

Historically, most software providers have made a tradeoff between time-to-market and quality -- and have eventually paid a high price for not adequately resolving this dilemma. But Paul and Mike promised a detailed look at the solution. And this month, one of Rational's thought leaders delivers on that promise. In the first of a four-part series, Walker Royce examines the relationship of software engineering practices to cost containment since the 1960s, and introduces four broad recommendations for improving the economics of software development.

As Walker says in his introduction, one of Rational's primary goals is "...to apply what we have learned in order to enable software development organizations to make substantial improvements in their software project economics and organizational capabilities." Look for further recommendations based on Rational's years of software development experience in the May, June, and July issues of The Rational Edge.

In this issue, you'll also find great advice on an array of other technical and management concerns, including techniques for building better use cases, a process for selecting testing tools, issues to consider regarding business processes and automation, how to deploy Web code and content without driving your Web team bonkers, and much more.

Welcome to the first issue of Spring 2001!

Mike Perrow


Editor-in-Chief

Reader Mail

This month's reader mail features an extended question and a series of answers from Rational technical experts regarding the link between Rational Rose and Rational RequisitePro.

Latest News

New Rational Test RealTime: The Industry's First Complete Solution for Testing Embedded Systems and more.



Improving Software Development Economics Part I: Current Trends

by Walker Royce, Vice President and General Manager, Strategic Services, Rational Software

Over the past two decades, the software industry has moved unrelentingly toward new methods for managing the ever-increasing complexity of software projects. We have seen evolutions, revolutions, and recurring themes of success and failure. While software technologies, processes, and methods have advanced rapidly, software engineering remains a people-intensive process. Consequently, techniques for managing people, technology, resources, and risks have profound leverage.

For more than twenty years, Rational has been working with the world's largest software development organizations across the entire spectrum of software domains. Today, we employ more than 1,000 software engineering professionals who work onsite with organizations that depend on software. We have harvested and synthesized many lessons from this in-the-trenches experience. We have diagnosed the symptoms of many successful and unsuccessful projects, identified root causes of recurring problems, and packaged patterns of software project success into a set of best practices captured in the Rational Unified Process. Rational is also a large-scale software development organization, with more than 750 software developers. We use our own techniques, technologies, tools, and processes internally, with outstanding results, as evidenced by our own business performance and product leadership in the market.

One of our primary goals is to apply what we have learned in order to enable software development organizations to make substantial improvements in their software project economics and organizational capabilities. This is the first in a series of four articles that summarize the key approaches that deliver these benefits.


A Simplified Model of Software Economics

There are several software cost models in use today. The most popular, open, and well-documented model is the COnstructive COst MOdel (COCOMO), which has been widely used by industry for 20 years. The latest version, COCOMO II, is the result of a collaborative effort led by the University of Southern California (USC) Center for Software Engineering, with the financial and technical support of numerous industry affiliates. The objectives of this team are threefold:

1. To develop a software cost and schedule estimation model for the lifecycle practices of the post-2000 era

2. To develop a software project database and tool support for improvement of the cost model

3. To provide a quantitative analytic framework for evaluating software technologies and their economic impacts

The accuracy of COCOMO II allows its users to estimate cost within 30 percent of actuals, 74 percent of the time. This level of unpredictability in the outcome of a software development process should be truly frightening to any software project investor, especially in view of the fact that few projects ever miss their financial objectives by doing better than expected.

The COCOMO II cost model includes numerous parameters and techniques for estimating a wide variety of software development projects. For the purposes of this discussion, we will abstract COCOMO II into a function of four basic parameters:

● Complexity. The complexity of the software solution is typically quantified in terms of the size of human-generated components (the number of source instructions or the number of function points) needed to develop the features in a usable product.

● Process. This refers to the process used to produce the end product, and in particular its effectiveness in helping developers avoid non-value-adding activities.

● Team. This refers to the capabilities of the software engineering team, and particularly their experience with both the computer science issues and the application domain issues for the project at hand.

● Tools. This refers to the software tools a team uses for development, that is, the extent of process automation.

The relationships among these parameters in modeling the estimated effort can be expressed as follows:

Effort = (Team) * (Tools) * (Complexity)^(Process)

Schedule estimates are computed directly from the effort estimate and process parameters. Reductions in effort generally result in reductions in schedule estimates. To simplify this discussion, we can assume that the "cost" includes both effort and time. The complete COCOMO II model includes several modes, numerous parameters, and several equations. This simplified model enables us to focus the discussion on the more discriminating dimensions of improvement.
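To make the shape of this simplified relationship concrete, here is a minimal sketch in Python. The multiplier values and the 1.10 process exponent are illustrative assumptions, not calibrated COCOMO II coefficients; the point is only to show how the four parameters combine.

# A minimal, illustrative sketch of the simplified cost model discussed above.
# The multipliers and the process exponent are invented for illustration;
# they are not calibrated COCOMO II values.

def estimated_effort(complexity, process_exponent, team_multiplier, tools_multiplier):
    """Effort = (Team) * (Tools) * (Complexity)^(Process)."""
    return team_multiplier * tools_multiplier * complexity ** process_exponent

if __name__ == "__main__":
    # Hypothetical project: complexity of 100 (say, KSLOC), an average team and
    # toolset (multipliers of 1.0), and a process exponent of 1.10.
    effort = estimated_effort(complexity=100, process_exponent=1.10,
                              team_multiplier=1.0, tools_multiplier=1.0)
    print(f"Estimated effort: {effort:.0f} effort units")   # roughly 158

A stronger team or better tooling would appear as multipliers below 1.0, shrinking the estimate; a weaker process would appear as a larger exponent, inflating it.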

What constitutes a good software cost estimate is a very tough question. In our experience, a good estimate can be defined as one that has the following attributes:

● It is conceived and supported by the project manager, the architecture team, the development team, and the test team accountable for performing the work.

● It is accepted by all stakeholders as ambitious but realizable.

● It is based on a well-defined software cost model with a credible basis and a database of relevant project experience that includes similar processes, similar technologies, similar environments, similar quality requirements, and similar people.

● It is defined in enough detail for both developers and managers to objectively assess the probability of success and to understand key risk areas.

Although several parametric models have been developed to estimate software costs, they can all be generally abstracted into the form given above. One very important aspect of software economics (as represented within today's software cost models) is that the relationship between effort and size (see the equation above) exhibits a diseconomy of scale. The software development diseconomy of scale is a result of the "process" exponent in the equation being greater than 1.0. In contrast to the economics for most manufacturing processes, the more software you build, the greater the cost per unit item. It is desirable, therefore, to reduce the size and complexity of a project whenever possible.
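A short worked example makes the diseconomy visible. Using a purely illustrative process exponent of 1.10 (the same assumed value as in the sketch above, not a calibrated figure), doubling the size of a system raises the effort estimate by more than a factor of two:

Effort(2S) / Effort(S) = (2S)^1.10 / S^1.10 = 2^1.10 ≈ 2.14

so the cost per unit of software delivered rises as the system grows, which is exactly the diseconomy of scale described above.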

Trends in Software Economics

Software engineering is dominated by intellectual activities focused on solving problems with immense complexity and numerous unknowns in competing perspectives. In the early software approaches of the 1960s and 1970s, craftsmanship was the key factor for success; each project used a custom process and custom tools. In the 1980s and 1990s, the software industry matured and transitioned into more of an engineering discipline. However, most software projects in this era were still primarily research-intensive, dominated by human creativity and diseconomies of scale. Today, the next generation of software processes is driving toward a more production-intensive approach, dominated by automation and economies of scale. We can further characterize these three generations of software development as follows:

1. 1960s and 1970s: Conventional. Organizations used virtually all custom tools, custom processes, and custom components built in primitive languages. Project performance was highly predictable: Cost, schedule, and quality objectives were almost never met.

2. 1980s and 1990s: Software engineering. Organizations used more repeatable processes, off-the-shelf tools, and about 70 percent of their components were built in higher level languages. About 30 percent of these components were available as commercial products, including the operating system, database management system, networking, and graphical user interface. During the 1980s, some organizations began achieving economies of scale, but with the growth in applications complexity (primarily in the move to distributed systems), the existing languages, techniques, and technologies were just not enough.

3. 2000 and later: Next generation. Modern practice is rooted in the use of managed and measured processes, integrated automation environments, and mostly (70 percent) off-the-shelf components. Typically, only about 30 percent of components need to be custom built.

Figure 1 illustrates the economics associated with these three generations of software development. The ordinate of the graph refers to software unit costs (per Source Line of Code [SLOC], per function point, per component -- take your pick) realized by an organization. The abscissa represents the life-cycle growth in the complexity of software applications developed by the organization.

Technologies for achieving reductions in complexity/size, process improvements, improvements in team proficiency, and tool automation are not independent of one another. In each new generation, the key is complementary growth in all technologies. For example, the process advances could not be used successfully without new component technologies and increased tool automation.


Figure 1: Trends in Software Economics

Keys to Improvement: A Balanced Approach

Improvements in the economics of software development have been not only difficult to achieve, but also difficult to measure and substantiate. In software textbooks, trade journals, and market literature, the topic of software economics is plagued by inconsistent jargon, inconsistent units of measure, disagreement among experts, and unending hyperbole. If we examine only one aspect of improving software economics, we are able to draw only narrow conclusions. Likewise, if an organization focuses on improving only one aspect of its software development process, it will not realize any significant economic improvement -- even though it may make spectacular improvements in this single aspect of the process.

The key to substantial improvement in business performance is a balanced attack across the four basic parameters of the simplified software cost model: Complexity, Process, Team, and Tools. These parameters are in priority order for most software domains. In Rational's experience, the following discriminating approaches have made a difference in improving the economics of software development and integration:

1. Reduce the size or complexity of what needs to be developed.

● Manage scope.

● Reduce the amount of human-generated code through component-based technology.

● Raise the level of abstraction, and use visual modeling to manage complexity.

2. Improve the development process.

● Reduce scrap and rework by transitioning from a waterfall process to a modern, iterative development process.

● Attack significant risks early through an architecture-first focus.

● Use software best practices.

3. Create more proficient teams.

● Improve individual skills.

● Improve project teamwork.

● Improve organizational capability.

4. Use integrated tools that exploit more automation.

● Improve human productivity through advanced levels of automation.

● Eliminate sources of human error.

● Enable process improvements.

Most software experts would also stress the significant dependencies among these trends. For example, new tools enable complexity reduction and process improvements; size-reduction approaches lead to process changes; and process improvements drive tool advances.

In subsequent issues of The Rational Edge, we will elaborate on the approaches listed above for achieving improvements in each of the four dimensions. These approaches represent patterns of success we have observed among Rational's most successful customers who have made quantum leaps in improving the economics of their software development efforts.



RUP and XP, Part II: Valuing Differences

by Gary Pollice, Evangelist, The Rational Unified Process, Rational Software

In the last issue of The Rational Edge, we looked at the common ground between the Rational Unified Process (RUP®) and eXtreme Programming (XP). They certainly have a lot in common. This month, in Part Two of our comparison, we look at the last three XP practices and at some areas of RUP not covered by XP.

There are three XP practices we deferred discussing until this issue. They are:

● Small releases

● Collective code ownership

● Metaphor

We will discuss each of these. But first, I'd like to point out that the set of XP practices we are talking about is the original twelve practices set forth by Kent Beck in his book Extreme Programming Explained: Embrace Change, published by Addison-Wesley in 1999. As of March 15, 2001, there were several additional supporting practices listed on the Extreme Programming Roadmap page: http://c2.com/cgi/wiki?ExtremeProgrammingRoadmap. This indicates the dynamic and somewhat experimental nature of XP, which is not necessarily a bad thing. Any process needs to be dynamic and keep up-to-date with proven best practices. At the end of this article we will also look at some ways RUP and XP can work together to provide a good experience for software development project members.

Small Releases: How Small and Released to Whom?


What is a release? Depending upon how you answer this question, RUP and XP can seem quite similar in their concepts of a release. RUP defines a release as: "...a stable, executable version of product, together with any artifacts necessary to use this release, such as release notes or installation instructions."1 Furthermore, according to RUP, releases are either internal or external. By this definition, a "release" creates a forcing function that ensures a shippable product, rather than a system that is only 80 percent complete. XP defines a release as "...a pile of stories that together make business sense."2 In much of the discussion about small releases on some XP Web pages, the practice of small releases seems to coincide with the practice of continuous integration.3 If you interpret the stories to mean the code as well as any artifacts necessary to use the release, and you accept the release as either internal or external, then the RUP and the XP concepts of a release are almost identical.

RUP invites you to consider more than just code. A release, especially an external one to the customer, may prove useless unless accompanied by release notes, documentation, and training. XP addresses code and assumes the rest will appear. Since code is the primary artifact of XP, the others need to be derived from it. This implies certain skills that may not be obvious. For example, technical writers might need to be able to read the code to understand how the system works in order to produce the documentation.

I have talked with several people who assume the frequent releases in XP are all to be delivered to an external customer. In fact, XP is not clear about this. In Extreme Programming Installed, the authors urge you to get the code into the customer's hands as frequently as possible.4 The fact is, in many organizations customers cannot accept frequent software updates. You need to weigh the benefits of frequent delivery against the impact on the customer's ability to be productive. When you are unable to deliver a system to the customer, you should consider other ways of getting feedback, such as usability testing. On a RUP-based project, you typically deliver to the customer in the last construction iteration as well as in the transition phase iterations.

Collective Code Ownership: Yours, Mine, and Ours

XP promotes "collective code ownership," which means that when you encounter code that needs to be changed, you change it. Everyone has permission to make changes to any part of the code. Not only do you have permission to make the changes -- you have the responsibility to make them.

There is an obvious benefit with this practice. When you find code that needs to be changed, you can change it and get on with your work without having to wait for someone else to change it. In order for this practice to work, however, you need to also practice "continuous integration" and maintain an extensive set of tests. If you change any code, then you need to run the tests and not check in your code changes until all tests pass.
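As a minimal sketch of that discipline (the article names no specific tools; pytest and the script below are assumptions for illustration), a team might gate every check-in on a clean test run:

# Illustrative check-in gate: run the whole test suite and refuse to proceed
# unless every test passes. "pytest" is an assumed test runner, not one named
# in the article.
import subprocess
import sys

def tests_pass():
    """Run the full test suite; return True only if every test passes."""
    result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
    return result.returncode == 0

if __name__ == "__main__":
    if not tests_pass():
        sys.exit("Tests failed: do not check in these changes.")
    print("All tests pass: safe to check in.")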

But will collective ownership work everywhere? Probably not. Large systems contain too much content for a single person to understand it all at a detailed level. Even some small systems include code that is complex because of its domain or the function it performs. If a specialist is required, then collective ownership may not be appropriate. When a system is developed in a distributed environment, it is not possible for everyone to modify the code. In these cases, XP offers a supporting practice called "code stewardship."5 With code stewardship, one person, the steward, has responsibility for the code, with input from other team members. There are no guidelines for when to apply code stewardship instead of collective code ownership.

Collective code ownership provides a way for a team to change code rapidly when it needs changing. Are there any potential drawbacks to this? If you consider all the ways code is changed, then there are some things you may want to control less democratically, in a centralized way -- for example, when code is modified because it lacks some functionality. If a programmer is implementing a story (or a use case, or a scenario), and requires behavior from a class, collective code ownership allows the class to be modified on the spot. As long as the system is small enough for a programmer to understand all of the code, this should work fine. But as the system gets larger, it is possible that the same functionality might be added to code that exists somewhere else. This redundancy might be caught at some point and the code refactored, but it is certainly possible for it to go unnoticed and for the functionality to begin diverging.

You may want to start a project using collective code ownership to allow your team to move quickly. As long as you have good code management tools and effective testing, then it will work for you -- up to a point. As a project leader or manager, however, you need to be on the lookout for the point when the code base becomes too large or too specialized in places. When this happens, you may want to structure your system into an appropriate set of components and subsystems and ensure that specific team members are responsible for them. RUP provides guidance and other help on how to structure your system.

System Metaphor: It's Like Architecture

A metaphor is a figure of speech that allows us to make comparisons. It is one way that we learn: "A motorcycle is like a bicycle, but it has a motor attached." XP uses a system metaphor instead of RUP's formal architecture. This system metaphor is "...a simple shared story of how the system works. This story typically involves a handful of classes and patterns that shape the core flow of the system being built."6 Based on comparisons with familiar things, patterns help us understand new and unfamiliar things.

And indeed, the XP system metaphor may be a suitable replacement for architecture in some cases, but usually only in small systems. For many, if not most, software systems, you need more than a simple shared story. How much more you need depends upon many factors.

By contrast, RUP is an architecture-centric process.7 Architecture is more than a metaphor, although it may include several metaphors. Architecture is concerned with structure, behavior, context, usage, functionality, performance, resilience, reuse, comprehensibility, constraints, trade-offs, and aesthetics. It is usually not possible to capture all of this in a simple shared story. Architecture does not provide a complete representation of the whole system. It concentrates on what is architecturally significant and important in reducing risks.

RUP provides a wealth of guidance on constructing and managing architecture. It helps the practitioner construct different views of the architecture for different purposes.7 The different views are needed because there are different aspects that need to be highlighted and different people who need to view the architecture.

A RUP-based project will address architecture early. Often an executable architecture is produced during the Elaboration Phase. This provides an opportunity to evaluate solutions to critical technical risks, and the architecture is built upon during subsequent construction iterations.

An executable architecture is a partial implementation of the system, built to demonstrate selected system functions and properties, in particular those satisfying non-functional requirements. It is built during the elaboration phase to mitigate risks related to performance, throughput, capacity, reliability and other "ilities", so that the complete functional capability of the system may be added in the construction phase on a solid foundation, without fear of breakage. It is the intention of the Rational Unified Process that the executable architecture be built as an evolutionary prototype, with the intention of retaining what is found to work (and satisfies requirements), and making it part of the deliverable system.8

What's Not Covered by XP That Is in RUP?

Your project may be able to use XP for developing the code. If not, then you may need to add some additional process, but just enough to reduce your risks and ensure that you are able to deliver the right product to your customers on time.

However, when you look at a development project as a complete set of deliverables, code, documentation, training, and support, there are many things RUP addresses that are not considered in XP. Again, you need to determine whether they are needed for your specific project. The following list provides things you may need to consider. The list is not exhaustive. You can find additional information about these items in the Rational Unified Process.

● Business modeling. The whole subject of business modeling is absent from XP. Systems are deployed into an organization. Knowledge of the organization can be important when identifying the requirements and for understanding how well a solution might be accepted.

● Project inception. XP assumes the project has been justified and does not address how that justification takes place. In many organizations, a business case must be made before a project begins in earnest. RUP helps a team make its business case by developing stakeholders' needs and a vision.

● Deployment. The whole area of system deployment is missing from XP. Any system needs supporting materials, minimally online documentation. Most need more. Commercial software products require packaging, distribution, user manuals, training materials, and a support organization. The RUP Deployment discipline provides guidance for practitioners on how to create appropriate materials and then use them.

Mix and Match for Best Results

Process diversity is important.9 One size does not fit all projects. The process you use for your project should be appropriate for it. Consider what your project needs and adopt the right approach. Consider all aspects and risks. Use as much as you need, but neither too little nor too much.

RUP and XP provide two different approaches to software development projects. They complement each other in several ways. XP concentrates on code and techniques for a small team to create that code. It stresses face-to-face communication and places minimal effort on non-code artifacts. RUP is a process framework that you configure for different types of projects. It invites you to consider risks and risk mitigation techniques. RUP is often misinterpreted as being heavy because, as a framework, it provides process information for many types of projects. In fact, a configured instance of RUP may be very light, depending upon the risks that need to be addressed. It may incorporate some of the excellent techniques of XP and other processes if they are appropriate for the project at hand.

If my project is only about creating code, I may use just XP. However, almost all of the projects I work on require initial business decisions and planning, complete documentation, support, and deployment to customers. For this reason, I would more likely start with RUP and use the appropriate XP practices that will help my team move ahead quickly and mitigate real risks the project faces.

As a software engineer, I try to have a well-stocked toolbox of techniques, processes, and tools that help me succeed. I'm glad to have both RUP and XP as part of my collection. Having more techniques in my toolbox means that I can provide better value to my project and my organization. In addition, as a project manager or process engineer, I can create an instance of a process for a project that addresses the organization's need for controls while providing individual project members with an environment that can be fun and satisfying.


1 Rational Unified Process Glossary.

2 Kent Beck, Extreme Programming Explained: Embrace Change. Reading, MA: Addison-Wesley, 1999, p. 178.

3 http://c2.com/cgi/wiki?FrequentReleases.

4 Ron Jeffries et al., Extreme Programming Installed. Reading, MA: Addison-Wesley, 2000, p. 50.

5 http://c2.com/cgi/wiki?CodeStewardship.

6 http://c2.com/cgi/wiki?SystemMetaphor.

7 This is described in Philippe Kruchten, "The 4+1 View Model of Architecture." IEEE Software, November 1995.

8 This definition is taken from the Rational Unified Process glossary.

9 For more information on process diversity, see Mikael Lindvall and Ioana Rus, "Process Diversity in Software Development." IEEE Software, July/August 2000.



Tighten Business Processes Before You Automate

by John Wardley, Software Industry Consultant

Over the last ten years we have seen unprecedented growth in the economy. Faster computers, better software, and accessible, transaction-friendly Web sites helped fuel this growth and allowed many companies to expand at a dizzying pace. This rapid growth has had a profound effect on business processes in both large and small organizations.

In his article "Introduction to Business Modeling Using the Unified Modeling Language," which appeared in last month's issue of The Rational Edge, author Jim Heumann observed that:

The role of software has changed. It is no longer about cool features for computer hobbyists. Instead, commercially driven software projects are becoming more business focused, and the emphasis has shifted from technical innovation to commercial added value. Software must be delivered rapidly in increments driven by business value rather than technical needs.

In this article, we will take a higher-level view of this demand for business value, examining the challenges IT departments face vis-à-vis business process modeling. As we will see, for many projects, the most significant challenge may be to discover an effective business process that is worth modeling at all. Too often, in both build and buy situations, companies don't fully analyze their existing (typically sub-optimal) business processes and the impact that new software solutions will have on them.

Technology: Solution or Problem?

True, technology in many ways has been the pillar of economic expansion over the past decade. During this time, I've worked with customers and partners on many challenging IT projects. Unfortunately, at times I've also been forced to stand on the sidelines and watch projects run amok, far exceeding their timelines and budgets and, in some cases, never working as originally billed. We can all remember when, in the mid-1990s, SAP was the poster child for such projects. In response, the Big Six consulting firms quickly ramped up their SAP practices, and customers invested millions in service fees to help get these systems running. SAP projects, however, were not the only ones that bordered on disasters; most Enterprise Resource Planning projects, as well as other major enterprise software installations, had their share of trouble. In fact, findings published in a recent Standish Report suggest that the overall success rate for U.S. software projects is still only 26 percent!

Why are these numbers so low and the problems so pervasive? The main reason is that, too often, companies are attempting to automate business processes that are inherently flawed. Under these circumstances, any major IT project that spans multiple departments will only multiply the problems. And this is an "equal opportunity" phenomenon with respect to build or buy installations, and Application Service Provider-based solutions for that matter.

In addition, cross-departmental software always has an impact on the organization and its personnel structure as well as on business processes. But few companies anticipate the adjustments that must be made among business personnel, IT departments, and corporate infrastructure as the new automated systems become part of the business's routine. Although it may require more initial outlays, "build" is probably a better strategic choice than "buy" for many companies. Instead of trying to force-fit a purchased application, they can develop solutions in-house that fit their organization's specific structure and culture and react to problems as they arise.

A thorough discussion of this topic could consume at least one more article in The Rational Edge, however, so for now we will focus on our main point: If you do not take into account the existing business process and the impact new software solutions will have on a given department before implementation, then you are in for certain trouble. And no amount of post-implementation technical support or training will save you.

Software Systems Amplify the Impact of Business Models

Early software implementations designed to solve general business problems focused largely on discrete, departmental-level business functions: human resources, payroll, accounts payable and receivable, and so on. Their impact was quickly felt as they sped the processing of tasks that formerly took days and weeks to perform manually. Since these applications focused on automating a discrete function, they could easily be implemented without impacting the entire organization.

By contrast, today's enterprise-class software suites comprise multiple functions and strive to automate multiple departments in a single pass. Needless to say, their level of complexity grows dramatically as they attempt to solve problems across different business organizations. This increase in scope and complexity is not simply a matter of supporting more end users; large suites must also support each department's requirements, including business goals and existing business processes. Often, however, this is impossible for the system to achieve without a re-architecting of the entire company. In other words, instead of the existing business processes driving the organization, and hence the software solution, these enterprise-class software suites drive the business processes and the overall organizational structure.

Naturally, this creates serious tension and often failure, especially if the company plunges ahead with the implementation without understanding what restructuring is required. And, as we continue to investigate this phenomenon, we can't always point the finger at the technology and scream, "There's the problem." The real problem is a mismatch between the business and the technology: Business models are not perfect, and IT installations cannot meet 100 percent of the expectations for them.

Good Model, Great Results

What the technology does inevitably do -- the software, hardware, and services -- is to act as an amplifier. Specifically, because technology speeds the function of repeatable business processes, it also amplifies the effects of those processes. With good business models, this amplification is a good thing. For example, to pay my monthly bills I used to write each check individually, record each transaction in my checkbook, assemble the bills in addressed envelopes, affix the stamps, and walk the envelopes down the street to my corner mailbox. The postal service took over from there. The total elapsed time from my check writing to my creditors' receiving and processing payment was seven to eight days. Although it was time consuming, this was an effective, relatively error-free process. Today, I essentially replicate this effective process online. I select all the creditors at once, enter the payment amounts, and click on the "Submit" button. The total elapsed time? Two days, max. That's roughly four times faster! The essential "business process" is the same: noting dollar amounts owed, paying them, and keeping track of the transaction. But with technology at my disposal, the process is faster, and life is easier.

At the most recent Oracle AppsWorld Conference in New Orleans, Oracle claimed that they have saved over two billion dollars from when they first implemented their own solutions. As a software and services vendor, they certainly saved money on the implementation. But the primary savings came in their ability to do business more efficiently. For example, Oracle claims that reduced customer calls alone have netted a $1.4 billion savings! And by consolidating and centralizing their worldwide datacenters, Oracle is reducing its global IT operating budget from $600 million to $400 million (the $200 million savings is expected to be realized by May 2001). The point is, dramatic rearchitecture and new technology investments make plenty of sense -- in this case, they are yielding tremendous savings -- when those changes are supporting sound business processes.

Garbage Out -- On a Grand Scale

Clearly, it is at the intersection of IT solutions and existing business processes that the rubber meets the road. Just as technology can amplify the beneficial effects of a good process, if you introduce technology to implement a bad business model, then the old adage of "Garbage in, garbage out" applies. Fundamentally, if the customer's business processes don't jibe with the software being implemented, or even worse, the business processes don't work correctly on their own, then no amount of technology can help. In fact, if the technology is designed to speed the function of repeatable business processes, then applying it will merely speed up the course of failure for the faulty practices.

Unfortunately, the flaws in these processes often become apparent only when a company attempts to automate them. To the average new-system user, accustomed to doing end runs around the problems, it may seem that everything was fine prior to the installation, but after the installation, everything just fell apart! In truth, the source of trouble is that people who are forced to use a faulty automated process often behave the same way as people using a faulty manual process: They begin incorporating on-the-spot workarounds to compensate for the system's failings. And that is what creates chaos.

For example, my wife recently went to the Boston outlet of a national chain store to return a $149 computer carrying bag that she purchased while traveling. Unfortunately, the store did not carry the same bag, and the company does not have an enterprise-wide system for storing SKUs (Stock Keeping Unit numbers), so neither she nor the store could find the correct number for it. When she called the store where she made her original purchase, they could not find the SKU either, but the manager offered another solution: "How about if I just give you another SKU that also happens to be for $149?" That was fine for her. She returned the bag in Boston and went on her way. For the company, on the other hand, this solution created two inventory errors: Their records now show one less computer bag than they actually have in stock and one more of that other $149 item than is really available. Down the line, the store manager's workaround solution will create problems for others using the system.

Problems such as these are not new. What's new is the amount of damage they can do. As we noted above, early software applications were implemented to solve discrete business functions, most of which did not overlay onto broad business processes. The success of applications for functions such as word processing and accounting innocently led to greater expectations about the power of IT automation. And today's complex, enterprise-wide suites of business applications have, indeed, extended this power: By implementing them you can either achieve remarkable efficiencies or wreak havoc on an entire organization in one fell swoop.

A New Approach

Even as companies struggle with the effects of automating imperfect business models, in today's competitive environment, vendors and customers alike are feeling the pressure to improve on that 26 percent success rate for software projects. Project managers are constantly pushed to both speed up implementations and ensure a predictable return on investment.

To address these pressures, software vendors are beginning to treat customers more like partners, and installations more like joint ventures with their customers. Some software companies offer services to help companies address existing business problems before installing software -- or even before selecting the technology. These services can be loosely defined as management consulting efforts, but they differ from classic management consulting services, which are often abstract and distant from a company's day-to-day needs. Instead, this vertically focused consulting aims at business process: It aligns specific consulting expertise with specific technology areas such as customer relationship management (CRM), sales force automation (SFA), or channel management, for example. This new approach migrates away from the notion of a CRM project, for example, as strictly an IT software installation, and toward the idea of a CRM project as a business strategy. In other words, this new breed of consulting services recognizes that these installations directly impact the company's bottom line, and that they are integral to both the tactical (short-term) and strategic (long-term) business operation.

Two Forward-Thinking Vendors

Two software vendors that have embraced this new approach to business process analysis are the SFA and CRM giant Siebel and the up-and-coming CRM competitor Onyx Software. Both companies do pre-installation best-practice consulting to help guarantee successful implementations and happy, referenceable customers.

In late 1999, Siebel acquired OnTarget, a leading provider of consulting services and training programs for sales and marketing organizations. In doing so, Siebel gained capabilities that directly mapped to its current software applications strategy. It could easily align skill sets from both organizations and, with very little ramp-up time, begin leveraging the existing client bases of both companies for its new, expanded offerings. One of the main things OnTarget brought to Siebel was CHAMP (Channel and Alliances Management Process), a framework that helps companies define and refine their existing channel process. Siebel quickly married CHAMP to its software applications and now offers a complete package of services to its customers, from business modeling to application installation.

Onyx Software, a mid-market CRM applications software vendor based in Washington, also expanded its consulting services through a strategic acquisition. Late last year, Onyx purchased the boutique consulting firm RevenueLab, which, like OnTarget, approaches the software selection process from a purely business perspective. In RevenueLab's case, the focus is on "go-to-market strategies," or the key business drivers that impact the company every minute of every day, without regard to internal politics that may influence decisions. RevenueLab takes the customer's view of the company, adjusts the business model based on the corporation's goals, and then recommends the best CRM solution -- even, so they claim, if this CRM solution is from a vendor other than Onyx!

In both of these instances, companies that were primarily software vendors recognized a need and took necessary steps to offer services that bridge the gap between their customers' business environment (which formerly they did not know and could not control) and their products, which are designed to manage their customers' businesses. Only when you gain some control over your customer's business models can you ensure that your product will succeed within your customer's business environment.

Guidelines for Success

As business becomes more and more competitive and a decent return on investment becomes an increasingly important requirement, how can you ensure that new IT projects will be successful? Here are my suggestions for software customers and their vendors.

For Customers

● Make sure you're modeling the correct business process. Before you even begin to investigate which new technology to purchase, take time to analyze how you're doing business now and what needs to change. Talk with all relevant business constituencies and be sure their departments' functions are defined and documented correctly.

● Stay focused on potential impact. Managing large, internal steering committees can be challenging, and it's important to keep everyone informed and on track. But don't let these administrative responsibilities distract you from conducting a careful assessment of how new technology will affect not only your business processes, but also personnel and organizational structure. Also, make sure you have executive-level support early in the process.

● Get outside help with your purchase decision. Leverage as many data sources as necessary to make a good purchasing decision. Use software tools designed for comparing products, and consult industry analysts to help you arrive at a short list of products. Also check customer references from both applications providers and systems integrators.

● Keep pre-implementation consulting focused on the issues. Do your homework before you bring in the experts. The more you know about your organization, the better this process will work.

For Software Vendors

● Focus on immediate needs. It is easy to get off track and talk about bigger, better projects. From a company management perspective, consulting practices are certainly a revenue generator, but concentrate on setting expectations, laying the foundation for delivering the goods, and accelerating delivery times for existing deals. Don't try to leverage work sessions with your customer to expand the deal or make new ones. Happy customers are what drive more and bigger deals.

● Provide domain expertise. The consultants you send for these jobs should be practitioners of the trade. That is, they should have deep, real-world experience with the specific business segment and with the particular technology the company is implementing. Because they are consulting for end users who live and breathe the problems they are addressing, these consultants must gain credibility instantly, or the process will not go smoothly.

Keeping these guidelines in mind will go a long way toward setting correct expectations, involving the right constituencies, and automating an appropriate and effective business process. In the end, that means a successful project for customer and vendor alike.



Clarity and Precision: Two Approaches to Better Use-Case Descriptions

An Introduction by Kurt Bittner

The heart of the use case -- a proven technique for ensuring that the system you build will do what the customer wants it to do -- is a detailed, written description of the system's expected behavior. Often, however, the details seem to "get in the way," making it difficult to understand what the system is really supposed to do. For authors of use cases, it can be challenging to strike precisely the right balance between clarity and precision; fortunately, there are strategies and techniques that can help.

The two articles in this special subsection describe two different techniques that can be used either separately or together to reduce the complexity and improve the clarity of detailed use-case descriptions. The article by Kurt Bittner, "Managing Use-Case Details," discusses the use of glossaries and domain models as a way to present background information, information requirements, and even business rules. By using these techniques, a designer can hide supporting detail but still ensure that important information is captured.

The article by Ben Lieberman, "UML Activity Diagrams: Versatile Roadmaps for Understanding System Behavior," describes how to use Activity Diagrams, which are specified in The Rational Unified Process, to visually represent complex flows of events within use cases. This technique can be used in addition to, and in support of, textual use-case descriptions to make these event flows easier to understand.

When used in appropriate circumstances, these two techniques provide system designers with simple but powerful ways to manage the presentation of details in use-case descriptions.



UML Activity Diagrams: Versatile Roadmaps for Understanding System Behavior

by Ben Lieberman, Senior Software Architect, Blueprint Technologies

The core purpose of software development is to provide solutions to customers' real problems. Use cases1 are a vital aspect of a technique that has been used successfully to ensure that development projects actually focus on these problems. They are used to discover, capture, and present customer requirements in a form that is accessible to developers, testers, and other stakeholders in a development project. To detail a use case, it is critical to capture basic, alternate, and exceptional flows of execution, which represent major and minor threads of execution the system encounters as it processes customer requests.

Using the "standard" use-case form,2 these flows can be captured using plain English to describe sequential activities (see Figure 1). These descriptions are quite detailed, however, and they can be difficult to decipher -- especially within a complex set of use-case scenarios.

This article describes another way to capture these flows: by using Unified Modeling Language (UML) Activity Diagrams that depict the flows as "roadmaps" of system functional behavior. These roadmaps are analogous to AAA (American Automobile Association) roadmaps, in that they show what routes you can take but do not indicate whether you will take them. An AAA map, moreover, supplies only enough information to identify locations of interest, leaving detailed descriptions of the road for companion travel guides. Similarly, Activity Diagrams show a comprehensive summary of use-case flows but leave the design details up to other artifacts.

We will also take a brief look at other ways to use Activity Diagrams during the development lifecycle.


Basic Flow:

1. The User requests login to the system.

2. The User enters login ID and password.

3. The System validates the User's permissions.

4. The User is presented initial system menu choices.

5. [...the use case continues...]

Alternate Flow:

1. In Step 2 the User requests a new password.

2. The User enters the login ID, and new and old passwords.

3. The System validates the User's Permissions and continues at Basic Flow Step 4.

Exceptional Flow:

1. In Step 3 of the Basic Flow and Step 2 of the Alternate Flow the User enters either an invalid login ID or an incorrect password.

2. The System returns an error condition with the string "The User login ID and/or password is incorrect."

3. Processing resumes at Step 2 of the Basic Flow.

Figure 1: Textual Descriptions of Basic, Alternate, and Exceptional Use-Case Flows

Activity Diagram Overview

The primary consumers of Activity Diagrams are the customer stakeholder, testing team, and software development staff. For them, these diagrams form the visual "blueprints" for the functionality of the system, as described and detailed in the use cases. Tracing paths (threads of execution) through these Activity Diagrams enables all stakeholders in the process to understand and monitor the development of system functionality.

The latest Unified Modeling Language (UML) specification, version 1.3,3 describes Activity Diagrams4 as a mechanism to capture business workflows, processing actions, and use-case flows of execution. Although the Rational Unified Process (RUP®) uses Activity Diagrams to detail activities for each of the nine workflows recommended for software development (Figure 2), it offers few other examples of Activity Diagram applications.

Figure 2: A Rational Unified Process Activity Diagram Illustrating the Requirements Workflow

In fact, Activity Diagrams can be used for many purposes: diagramming use-case flows; modeling complex business operations or processes (such as the one in Figure 2); depicting data and information flows; and even computing algorithms.5 In addition, as we will discuss, they can be used later in the development lifecycle for system impact analyses and to develop and track test cases. For a brief introduction to the standard icons and stereotypes used in Activity Diagrams, see the Sidebar below.

Example Use Case: Maintain User Profile

To understand the practical utility of Activity Diagrams for mapping use-case flows, let's walk through a realistic example of a use case for maintaining a user profile within a travel reservations system.


Activity Diagram Icons and Stereotypes

In constructing Activity Diagrams, it is helpful to use colored icons (and UML stereotypes) to indicate specific activities and visually differentiate various steps in a flow. This is particularly important for Off-Page icons (pointers to additional diagrams) that link to separate use-case scenarios.6

Action is the primary diagram element. This icon represents activities performed by the System or Actor. Since it is the most common icon, it typically has either a neutral color or no color at all.

Presentation activities are indicated by the <<presentation>> stereotype. This stereotype indicates that there is a conversation between the use-case Actor and the System. It represents a special category of Action activities and is used to abstract user interface details.

Exception activity occurs when there is an exceptional flow in the use case, and is indicated by the <<exception>> stereotype. This usually represents an error condition but may also represent unusual or unexpected system behavior. If the exception is an error condition, then it is useful to summarize the error inside the icon (see Figures 4 through 7). The icon is also useful to indicate system logging and recovery activities.

Data Entry activity is indicated by the <<data entry>> stereotype, which represents significant Actor interaction with the System for the purpose of adding, modifying, or removing data. Data Entry activities can range from simple field editing to complex visual rendering changes. This icon should be used with care to avoid cluttering the visual model with low-level data manipulation details. See "Set a Level of Abstraction" below for suggestions.

To gather the information needed for this use case, typically the System Analyst would conduct interviews with the subject matter experts to gain an understanding of the problem domain.7 The analyst could actually capture the information from these interviews directly in an Activity Diagram, or she could first write up a textual description of her findings and then create an Activity Diagram to illustrate it.

In our example, we will use Rational Rose to illustrate the development of an Activity Diagram based on a use case for maintaining an information profile for a specific customer. The use case establishes the initial boundary points for entry and exit; each step in the use-case flow will be shown as a set of activities and activity flows.

Figure 3 shows two activities from a simple use-case: 1) The user modifies his customer profile (a Presentation activity); 2) The system updates the information to a persistent store (shown here as a database icon). Note that there is no need to show all the processing steps at this stage; a typical session takes a top-down approach, starting broad and then narrowing the focus.


The <<connector>> stereotype represents connections to flows diagrammed elsewhere. The use of Activity Diagrams often leads to the creation of large and complex models, so it is useful to indicate alternate flow or extension points in use-case scenarios (see Figures 4 through 7). The Off-Page icons for this stereotype can be used to automatically link to another diagram (e.g., to a separate Rational Rose diagram using an embedded link). If desired, the extension to a Rose (Unified Modeling Language) diagram can lead to a "subactivity" diagram embedded within the activity itself. One caution, however: This approach can rapidly produce a very "deep" model with multiple embedded layers. Such a model runs contrary to the ideal of a high-level "road-map," which shows an overview and leaves the details for the textual description of the use-case. Although you can use this icon to indicate <<extends>> and <<includes>> use-case relationships, often these are best represented in the main use case diagram. If they are depicted on the Activity Diagram, then they should be shown as coming off Decision Points (diamonds) with guard conditions to <<connector>> activities.


It would be easy to expand our list of Activity Diagram stereotypes and icons, but this represents a fairly complete and simple set for modeling use-case activity flows. Since the purpose of these diagrams is to enhance understanding of complex use-case flows, adding new icons (and stereotypes) should be done with caution.

Additionally, rather than representing the concept of persistence as a separate icon, it can be embodied right in the activity name (e.g., Save to Persistent Store). Persistence is a very important part of almost every system, and it is often beneficial to show explicitly where it occurs. This information is of particular value to the Test team who need to determine where and when in a test case the information in the Persistent Store needs to be verified.

Arrows are used to indicate transitions from one action to another. The guard conditions on the transitions from the User Modifies Profile action indicate the possible paths presented to the Actor, shown here as [OK] and [Cancel].

Paths Must Have Entry and Exit Points

Since we are modeling process flow, we must include a path through the functionality that allows the user to enter and then exit the functional area to move to another. If such a path does not exist, then there is a very serious error in the model (and possibly in the system itself).


Figure 3: Initial Activity Diagram for the Maintain User Profile Use Case

Now, let's consider the same use case again, with the additional requirement that security must be in place for the viewing of sensitive information. Figure 4 shows the resulting diagram.

Figure 4: Addition of Security Flows to the Maintain User Profile Use Case

Leave the Details for Other Artifacts

Note that we have added two more icons to the diagram to represent a Decision Point and an Exception, respectively. Also note that the Decision Point asks if the user has valid access privileges but does not detail the permission criteria. That level of detail will be found in the use-case textual description. In addition, the login ID and password data elements are indicated next to the User Enters Security Data activity. Data elements important to the use-case flow should be indicated in a note as shown, with the remaining data left to the use-case text for full elaboration.

Finally, the diagram indicates that the Security Access Denied Exception will return to the System Presents Security Screen Presentation activity until the Try Count exceeds three attempts; then the use case will end. Note that the Exception is declarative, but the specific actions (e.g., display of an error dialog box or message) at this juncture are detailed in the Exceptional Flow section of the use-case document and Graphical User Interface (GUI) design screen shots (if they exist).

Set a Level of Abstraction

Next, let's explore some additional processing requirements. We will assume that the user needs to change his name and address, and that the system needs to assign him a customer priority category (e.g., VIP, Senior Citizen, Employee, etc.). The diagram now appears as shown in Figure 5. We have added some Data Entry activities to indicate the user's ability to change certain data elements. This may not represent the complete set of editable data elements, but it does include elements important to the processing flow. Note that the Data Entry begins and ends within the Presentation activity. This implies that the user may repeat these actions as often as necessary. This approach is intuitive for system users, who expect that the system will return to a "wait" state after they perform an action.

Figure 5: Addition of Data Modification Flows and Validation Steps to the Maintain User Profile Use Case


Now let's add more complexity to the model: We will assume a mandatory field for the customer name that must be correctly filled in before the user can exit the use case, as shown in Figure 6. We note the Name field next to the Presentation activity to show that it is important to the use-case flow. We have added a new Exception for when the customer name is not specified, and indicated that the Exception will reenter the main flow at the User Modifies Profile activity.

Finally, we will indicate that the User Modifies Profile activity can modify information about the user's travel preferences (assuming that this is part of the customer profile). We add the Off-Page (<<Connector>>) activity to indicate a link to another use case or use-case scenario. The name of the Off-Page activity should match the name of the use case or scenario referred to.


By now, the diagram has grown quite complex, so we can re-factor to further abstract activities. For example, we can collect all of the Data Entry activities into one activity or split portions of the diagram to separate illustrations and then connect them with an Off-Page activity. In addition, some of the activities (such as System Validates Entry) may be further elaborated in a separate diagram that we indicate by simply applying the Off-Page icon (<<connector>> stereotype).

Figure 6: Inclusion of Travel Information and Use Case Connectivity in the Maintain User Profile Use Case


As a final example of this type of diagram, Figure 7 shows how UML Swimlanes can be used effectively to show interactions among the various actors and the system. This is not vital (the previous diagrams do not show this, for example), but it can increase understanding of which participant in the use case is responsible for which activities.

The example in Figure 7 is a credit card payment submission. The use case begins with a Presentation to the customer that specifies the credit card payment; the customer then enters and submits her card details. The system validates these values and either returns to the customer if there is an error or submits the payment to the Credit Card Service. If the card payment is accepted, then the system notifies the customer of success. If not, then the error is logged, and the customer is notified of the failure (and perhaps directed to handle the payment some other way). Note that it is easy to add features such as error handling if the Credit Card Service is unavailable, and also additional system accounting activities.


Figure 7: Use of Swimlanes in an Activity Diagram to Indicate Actor/System Boundaries and Responsibilities


Using Activity Diagrams: Freedoms and Restrictions

As is evident from the variety of uses discussed above, Activity Diagrams allow for a great deal of freedom. They encourage the creator to use the right level of detail to "tell a story" about the system functionality. A model is a communication device, after all, so it requires an adequate level of detail to address the problem to be solved. Clarity and brevity are important to avoid visual overload, but a model should present key features of the use-case flows.

In creating Activity Diagrams, you should also observe a few key guidelines:

● Don't attempt to show system design elements. A common mistake when doing use-case specification is to move into the solution space before adequately defining the customer's true needs. A core principle of use-case specification is to focus on functionality the customer desires. If you create activities such as "Send Update Command to Profile Manager" or "Obtain Oracle Database Connection," then you are violating this key principle. The use-case Activity Diagram should serve as a guide to further analysis and design, not as a repository for design information.

● Don't substitute activity diagrams for use-case descriptions. The use-case flow diagrams are intended to summarize and supplement textual descriptions in the use cases, not replace them.

● Limit the level of complexity for each diagram action. As we saw in the Maintain User Profile use case (Figures 4-7), if more than three Data Entry activities accumulate, they should be collected into a common activity or split off into a separate diagram (as for the Travel Preference information in Figure 6). Use the following rules of thumb to limit complexity:

● If there are more than three possible paths (alternate or exceptional), then use additional Activity Diagrams to promote understanding.

● Use additional Activity Diagrams if the processing requires specific data elements.

● Use Swimlanes to separate concerns, particularly for Actor/System interfaces. See Figure 7 and the related discussion under "Set a Level of Abstraction" above.

● Do not use Activity Diagrams in this context to capture detailed system processing. Under no circumstances should low-level design information appear on these diagrams.

● Display as much of a use case as possible in a single diagram. If you are constrained by printable page size, then consider purchasing a large-carriage printer rather than forcing a complex diagram to fit 8.5 x 11 inch paper. Alternatively, make use of the Off-Page icon (<<connector>> stereotype) to logically separate models.

● Use a tool to maintain consistency for your models. Currently, no tool will automatically update Activity Diagrams linked to use cases, but most tools (e.g., Rational Rose 2001) will allow you to embed a diagram into the use-case model.

● Maintain your models. To have maximum benefit, your Activity Diagrams must be updated when use cases are modified. You can ensure this will happen by inserting the diagrams directly into the use cases as appendices. Moreover, the diagrams should be maintained in the same repository as the use cases. Rational Rose allows Activity Diagrams to be collected under a particular use case and for the textual representation of that use case to be linked to the same model location. This facilitates the update process and enhances the likelihood that the models will not become outdated.

More Uses for Activity Diagrams


We have looked carefully at how to use Activity Diagrams for mapping flows of execution through a use case, but there are other applications for them as well during the development lifecycle. These are explained briefly below.

System impact analysis. During system maintenance and enhancement, the development staff receives many requests to locate and repair system "issues" or faults, as well as to add new functionality. Use-case Activity Diagrams can be used to assess the likely functional impact these changes will have on the system. By tracking Activity Diagram flows into the analysis and design models (e.g., by tracing to object sequence diagrams), you can quickly identify modules and subsystems that will be affected by proposed system changes. The changes can then be reflected in the activity models by changing the outlines of the activities to a different color (e.g., red) or thickness. This allows the test and architecture teams to rapidly assess what testing resources are necessary as well as the level of potential system breakage.

Test case development. Test cases are derived from use cases.8 Therefore, use-case Activity Diagrams can be used to create specific scenarios for each test case. This can be done by tracing a thread of execution from entry to exit through each diagram, one for each test scenario. Activity Flow Diagrams are an excellent means for the test designer to scope the test for expected system behavior.
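
To make the idea of tracing threads of execution more concrete, the small C++ sketch below enumerates every entry-to-exit path through a simplified (and hypothetical) activity graph loosely based on the Maintain User Profile flows; each enumerated path corresponds to one candidate test scenario. The activity names, the graph itself, and the code are illustrative assumptions, not part of the original article, and cycles such as the retry loop are ignored for brevity.

    #include <cstddef>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Each activity maps to the activities reachable from it; an entry with
    // no successors marks an exit point of the use-case flow.
    typedef std::map<std::string, std::vector<std::string> > ActivityGraph;

    // Depth-first traversal: every complete entry-to-exit path becomes one
    // candidate test scenario.
    void enumeratePaths(const ActivityGraph& graph,
                        const std::string& current,
                        std::vector<std::string> path,
                        std::vector<std::vector<std::string> >& scenarios)
    {
        path.push_back(current);
        ActivityGraph::const_iterator it = graph.find(current);
        if (it == graph.end() || it->second.empty()) {   // exit point reached
            scenarios.push_back(path);
            return;
        }
        for (std::size_t i = 0; i < it->second.size(); ++i) {
            enumeratePaths(graph, it->second[i], path, scenarios);
        }
    }

    int main()
    {
        ActivityGraph g;
        g["Present Security Screen"].push_back("Validate Access");
        g["Validate Access"].push_back("User Modifies Profile");      // [access granted]
        g["Validate Access"].push_back("Security Access Denied");     // [access denied]
        g["User Modifies Profile"].push_back("Save to Persistent Store");
        g["Security Access Denied"];                                  // exit point
        g["Save to Persistent Store"];                                // exit point

        std::vector<std::vector<std::string> > scenarios;
        enumeratePaths(g, "Present Security Screen", std::vector<std::string>(), scenarios);

        for (std::size_t i = 0; i < scenarios.size(); ++i) {
            std::cout << "Test scenario " << i + 1 << ": ";
            for (std::size_t j = 0; j < scenarios[i].size(); ++j) {
                std::cout << scenarios[i][j]
                          << (j + 1 < scenarios[i].size() ? " -> " : "\n");
            }
        }
        return 0;
    }

Run as written, this prints two scenarios -- the "happy path" through the profile update and the access-denied path -- which a test designer could then flesh out with concrete data and expected results.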

Test case coverage tracking. If the test team is not using automated methods to track use-case test coverage, then they can use Activity Diagrams to show the progress of a testing effort. They can designate paths as major and minor to indicate testing priority. They can also highlight the diagrams to indicate which activities they covered with each test. In this way, the Activity Diagrams can provide a visual representation of test progress for each functional area of the system.

Overall: A Highly Useful Design Artifact

The UML is an excellent design and architecture language that has become the de facto standard for software system description. As we have seen, UML Activity Diagrams are particularly well suited for the discovery and visualization of complex functional process flows based on system use cases. Displaying these flows visually greatly improves the level of communication and understanding between the development staff and the customer. In addition, the test team can use these diagrams to directly aid in the creation of the test plan and test cases. Overall, Activity Diagrams represent a useful addition to the collection of design artifacts available to the software engineer.

Appendix

Rational Rose Activity Diagram "Colorizer" Script

This Rational Rose script (for Version 2000e and higher) will automatically add fill colors to the icons for Activity Views on each Activity Diagram included in a use-case model (Use Case View root package).


Want more information and advice on creating better use-case descriptions? See "Managing Use-Case Details" in this issue of The Rational Edge.

1 See Ivar Jacobson, Magnus Christerson, et al., Object-Oriented Software Engineering: A Use-Case Driven Approach. Harlow, Essex, England: Addison-Wesley, 1992. See also Dean Leffingwell and Don Widrig, Managing Software Requirements: A Unified Approach. Boston: Addison-Wesley, 2000.

2 See Ivar Jacobson, Magnus Christerson, et al., Object-Oriented Software Engineering: A Use-Case Driven Approach. Harlow, Essex, England: Addison-Wesley, 1992. See also Dean Leffingwell and Don Widrig, Managing Software Requirements: A Unified Approach. Boston: Addison-Wesley, 2000.

3 See http://www.rational.com/uml/index.jsp for detailed information.

4 See Grady Booch, James Rumbaugh, et al., The Unified Modeling Language User Guide. Reading, MA: Addison-Wesley, 1999. Also see James Rumbaugh, Ivar Jacobson, and Grady Booch, The Unified Modeling Language Reference Manual. Reading, MA: Addison-Wesley, 1999.

5 See Grady Booch, James Rumbaugh, et al., The Unified Modeling Language, User Guide. Reading, MA: Addison-Wesley, 1999.

6 For a Rational Rose script to automate the application of color to activity model elements, see the "Appendix".

7 See Dean Leffingwell and Don Widrig, Managing Software Requirements: A Unified Approach. Boston: Addison-Wesley, 2000.

8 The Rational Unified Process, 2000.




Managing Use-Case Details

by Kurt Bittner, General Manager, Process and Project Management Business Unit

Details, details! System designers seem to have a general fear of details when writing use cases, perhaps driven by the worry that clarity, the hallmark of a good use case, will be lost in a blizzard of details if they are not careful. Although this fear has some foundation in reality, unfortunately the typical response -- to omit all details from the use-case description -- results in use cases that lack specificity and real value.

As the old saying goes, "The devil is in the details," meaning that most of the real problems become apparent only when you get down to specifics. If we are to write effective use cases, then we must present details. So how can we overcome our fears about them?

The best strategy is to plunge ahead. All the details will be needed at some point, and you can move them to other artifacts later on. As Franklin Roosevelt said, "the only thing we have to fear is fear itself." Sometimes fear of detail is just procrastination in disguise: We worry about documenting a behavior with too much detail before we even start writing anything and then find ourselves unable to get started. Instead, try to put aside your fears and dig into the specifics of the required behavior to describe exactly what the system will do. If you do not provide enough detail, then there is a real possibility of failure: The development team will have to re-discover the requirements even after they read your use-case description. You will have wasted everyone's time, including your own.

This article discusses use-case flow description and presents several strategies for managing details in these descriptions, including using glossaries and domain models, plus representing complex business rules and other special requirements. En route, a number of examples are used to illustrate various aspects of the problem. So without further introduction, let's get started.

How Much Is Just Enough?

Or, put another way: How can we write use cases without "losing the forest for the trees"?

A use case needs to unambiguously describe the required behavior, no more and no less. If the system must respond in a certain way to a certain event, then you must say so in the use case. The trick is to express the behavior without dictating or constraining the design. For example, consider the following simple use case:

Example (from an Automated Teller System)

Use Case - Withdraw Cash - Main Flow

1. The use case starts when the customer inserts his bank card.

2. The system reads the card and requests the customer to enter a Personal Identification Number (PIN).

3. The system presents a menu of choices.

4. The customer indicates that he wishes to withdraw cash.

5. The system requests the amount to be dispensed, and the customer enters the amount.

6. The system dispenses the desired amount of cash, prints a receipt, and ejects the card.

7. The customer takes the cash, card, and receipt.

8. The use case ends.

Simple enough, right? Actually, too simple; all sorts of important details are missing: What information does the system read from the banking card? What does the system do to verify that the correct PIN has been entered? How does the system know that the customer has sufficient funds in his account? What information gets printed on the receipt?

Look what happens if we start including this information (in italics):

Example (from an Automated Teller System)


Use Case - Withdraw Cash - Main Flow

1. The use case starts when the customer inserts his bank card.

2. The system reads the bank card, obtaining the bank number, the account number, and the Personal Identification Number (PIN) from the card. The system then requests the customer to enter his PIN. The PIN can be up to 6 digits in length and must not include any repeated digits.

3. The system compares the entered PIN to the PIN read from the card to determine if the PIN entered is valid.

4. If the PIN entered is valid, then the system presents a menu of transactions: Withdraw Cash, Deposit Funds, Transfer Funds, See Account Balances.

5. The customer indicates that he wishes to withdraw cash.

6. The system requests the amount to be dispensed, and the customer enters the amount.

7. The system checks to see if it has sufficient funds in its dispenser to satisfy the request.

8. The system ensures that the amount entered is a multiple of $5 and does not exceed $200.

9. The system contacts the customer's bank to determine if the amount requested can be withdrawn from the customer's bank account.

10. If the customer has sufficient funds on hand, then the system dispenses the desired amount of cash.

11. The system prints a receipt with the following information:

● Date and time of transaction

● Location of the ATM

● Customer's bank number

● Type of transaction

● Amount of the transaction

● Transaction identifier (for tracking within the inter-bank network)

12. The system ejects the card.

13. The customer takes the cash, card, and receipt.

14. The use case ends.

Of course, the description gets quite a bit longer when we add details about what the system does and what information is captured.


Some of you are probably thinking that this is too much detail, but ask yourself this: If you were paying someone lots of money to develop the system, wouldn't you want to know exactly what the system was going to do? Also note that none of the "details" dictate how the system should be designed.
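
To underscore that last point, here is a minimal, purely illustrative C++ sketch (my own, not part of the article) showing how two of the stated details -- the PIN format rule from step 2 and the amount rule from step 8 -- could later be turned into straightforward checks at implementation time. The use case says what must hold; nothing about these functions, their names, or where such checks would live is dictated by it.

    #include <iostream>
    #include <string>

    // Rule from step 2: a PIN can be up to 6 digits long and must not
    // include any repeated digits.
    bool isValidPin(const std::string& pin)
    {
        if (pin.empty() || pin.size() > 6) {
            return false;
        }
        for (std::string::size_type i = 0; i < pin.size(); ++i) {
            if (pin[i] < '0' || pin[i] > '9') {
                return false;                    // digits only
            }
            for (std::string::size_type j = i + 1; j < pin.size(); ++j) {
                if (pin[i] == pin[j]) {
                    return false;                // no repeated digits
                }
            }
        }
        return true;
    }

    // Rule from step 8: the amount must be a multiple of $5 and must not
    // exceed $200.
    bool isValidWithdrawalAmount(int amountInDollars)
    {
        return amountInDollars > 0 &&
               amountInDollars % 5 == 0 &&
               amountInDollars <= 200;
    }

    int main()
    {
        std::cout << std::boolalpha
                  << isValidPin("183627") << std::endl            // true
                  << isValidPin("112233") << std::endl            // false: repeated digits
                  << isValidWithdrawalAmount(60) << std::endl     // true
                  << isValidWithdrawalAmount(210) << std::endl;   // false: exceeds $200
        return 0;
    }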

So, you may be thinking: If it takes almost a page to describe one use case for a "simple" system like an ATM, how many pages would a "real" system consume? And how can you keep the details from overwhelming the reader? We'll look at some strategies for managing details below.

Create a Glossary

Use cases often contain information that can be presented more effectively in other ways. Creating a glossary of terms is one way to present background information that can otherwise be distracting to the reader.

The Withdraw Cash use case, for example, includes a number of terms that need to be defined; we will highlight some of the key terms in boldface text in our description:

Example (from an Automated Teller System)

Use Case - Withdraw Cash - Main Flow

1. The use case starts when the customer inserts his bank card.

2. The system reads the bank card, obtaining the bank number, the account number and the Personal Identification Number (PIN) from the card. The system then requests the customer to enter his PIN. The PIN can be up to 6 digits in length and must not include any repeated digits.

3. The system compares the entered PIN to the PIN read from the card to determine if the PIN entered is valid.

4. If the PIN entered is valid, then the system presents a menu of transactions: Withdraw Cash, Deposit Funds, Transfer Funds, See Account Balances.

5. The customer indicates that he wishes to withdraw cash.

6. The system requests the amount to be dispensed, and the customer enters the amount.

7. The system checks to see if it has sufficient funds in its dispenser to satisfy the request.

8. The system ensures that the amount entered is a multiple of $5 and does not exceed $200.

9. The system contacts the customer's bank to determine if the amount requested can be withdrawn from the customer's bank account.


10. If the customer has sufficient funds on hand, then the system dispenses the desired amount of cash.

11. The system prints a receipt with the following information:

● Date and time of transaction

● Location of the ATM

● Customer's bank number

● Type of transaction

● Amount of the transaction

● Transaction identifier (for tracking within the inter-bank network)

12. The system ejects the card.

13. The customer takes the cash, card, and receipt.

14. The use case ends.

Although we could define these terms in the use case itself, there are good reasons not to:

● It would be distracting and get in the way of the flow of events description.

● Other use cases for the system probably use the same terms, so it is more efficient to define the terms once, in one place.

So instead, we can put the terms in a glossary. It's best to start creating a glossary as soon as special terms start popping up. This often happens while you are still discovering the use cases, so simply compile a list of terms to define later.

From the use case description above, we can begin creating a glossary:

Example Glossary for ATM System (partial)


Customer: A person who holds one or more accounts at a member financial institution of the ATM inter-bank network.

Customer's bank: The financial institution that issued the bank card, and at which the customer holds one or more accounts.

Account: A service provided by a financial institution to maintain a customer's money or securities. Each account is assigned an account number. The financial institution is obligated to pay the customer, upon demand and adhering to the terms of the account agreement, a defined sum of money.

Bank card: A physical identification device imprinted with magnetic information pertaining to the issuing financial institution:

● Bank number (assigned to the issuing financial institution by the government)

● Customer number (assigned by the issuing financial institution to the customer)

● Personal Identification Number (PIN), chosen by the customer at the time the card was issued

Personal Identification Number (PIN): An identification number, chosen by the customer, used in conjunction with the card for security purposes. The PIN can be up to 6 digits in length and must not include any repeated digits. To verify the identity of a customer, the ATM system requires the customer to enter her PIN; when the customer enters the same number as the PIN stored on the card, the system authenticates the customer's identity and allows her to proceed with account transactions.

Receipt: A physical, printed record of the transaction(s). The receipt presents the following information:

● Date and time of transaction

● Location of the ATM

● Bank number

● Type of transaction

● Amount of the transaction

● Transaction identifier (for tracking within the inter-bank network)

Actually, the text contains additional terms we may want to include in this glossary: dispenser, menu of transactions, amount to be dispensed, and so on.


For now, however, we will use this evolving glossary to see how we can begin slimming down our use case. Note how we have moved some details from the use case to the glossary in the example shown below:

Example (from an Automated Teller System)

Use Case - Withdraw Cash - Main Flow

1. The use case starts when the customer inserts his bank card.

2. The system reads the bank card information and requests the customer to enter his PIN.

3. The system compares the entered PIN to the PIN read from the card to determine if the PIN entered is valid.

4. If the PIN entered is valid, then the system presents a menu of transactions: Withdraw Cash, Deposit Funds, Transfer Funds, See Account Balances.

5. The customer indicates that he wishes to withdraw cash.

6. The system requests the amount to be dispensed, and the customer enters the amount.

7. The system checks to see if it has sufficient funds in its dispenser to satisfy the request.

8. The system ensures that the amount entered is a multiple of $5 and does not exceed $200.

9. The system contacts the customer's bank to determine if the amount requested can be withdrawn from the customer's bank account.

10. If the customer has sufficient funds on hand, then the system dispenses the desired amount of cash.

11. The system prints a receipt for the transaction.

12. The system ejects the card.

13. The customer takes the cash, card, and receipt.

14. The use case ends.

Now that we have the glossary, we don't have to define these terms again when we write the next use case. Sometimes, however, a glossary is not the only strategy we need to manage details.

Add a Domain Model for Interrelated Concepts

Although our simple ATM system does not illustrate the point very well (since it deals with very simple information), often glossary terms are tightly related. Consider the following definitions for an online order entry system.

Order: A contract between the company and a particular customer to provide a set of products. An order includes an order date, a shipped date, and an order number, plus an item list and product information.

Line Item: A number or alpha-numeric combination that specifies both the particular product and the quantity of that product being ordered. Appears on an order.

Customer: Purchaser of the items on an order.

Product: Commodity that is being sold to the customer. The product information on an order includes an identification number, description, and unit price.

Although it's perfectly acceptable to present this information in a glossary, as you may have noticed, there are a lot of cross-references in the definitions. That's a tip-off to the existence of structured relationships among the concepts.

Moreover, these structured relationships are actually evident within the entries: Each entry captures more information than just a simple definition of a term and would not be complete without reference to other terms. For information like this, which has structure or tight interrelationships, a domain model can be a useful presentation strategy.

A domain model provides a way to capture the relationships among concepts, as well as the structure of information within those concepts. Domain models are typically represented in diagrams (see Figure 1). Business object models1 and enterprise data models (the parts that relate only to business terms) are types of domain models.


Figure 1: A Simple Domain Model Diagram

Use the Model for Definition, Not Design

A domain model does not represent the design of the system; it simply defines in precise terms a set of concepts used in the problem domain. The ability to visualize these concepts can help the team and various stakeholders agree on the definition of these concepts. Despite the fact that the domain model will eventually give rise to design elements, it is not wise to start capturing design information while working on the use cases. It may be tempting to say to yourself, "It's work I'll need to do eventually, so why not start now?" But in truth, it's far better to focus on doing one thing at a time and doing it well.

The purpose of the domain model is to clarify concepts and facilitate communication. If half the team starts diving deeply into system design, then you will lose sight of this goal, and the design will suffer. Before you can develop a really good design, you must understand the problem, including all the key concepts. Simple problems can quickly become more complex if you fail to focus on just one thing at a time.

Capture Simple Business Rules

Many use cases encompass business rules that relate to how information is validated. For example:

● Postal codes in addresses must be complete and correspond to valid codes.


● Product prices must be positive and end in whole dollar amounts.

● Customers can order only in-stock products.

These simple rules can often be captured in the domain model. You can specify minimum and maximum amounts as well as other validation criteria right along with the definition for the information itself. This relieves you from the tedium of describing every criterion and prevents the reader from getting lost in those details. Use cases excel at describing what happens and when -- i.e., real flows of events and activities. If you interrupt the description repeatedly to describe how data is validated, then your reader will lose a sense of the flow.

Other business rules relate to the way that work is performed or the way information is used or updated. Again, you can incorporate many of these rules into the domain model to simplify your work and make the activity flows easier to understand. More complex business rules, however, may require a different approach. See "Capturing Complex Business Rules" below.
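
As a purely illustrative aside (the code and names below are my own assumptions, not part of the article), here is a minimal C++ sketch of how two of the simple rules above -- positive, whole-dollar product prices and in-stock-only ordering -- might eventually surface as validation checks once the domain model is carried into implementation; during the use-case work itself, the rules stay in the domain model.

    #include <cmath>
    #include <iostream>

    // Hypothetical product record; the field names are assumptions.
    struct Product {
        double priceInDollars;
        int    unitsInStock;
    };

    // Rule: product prices must be positive and end in whole dollar amounts.
    bool isValidPrice(double priceInDollars)
    {
        return priceInDollars > 0.0 &&
               std::fabs(priceInDollars - std::floor(priceInDollars)) < 1e-9;
    }

    // Rule: customers can order only in-stock products.
    bool canOrder(const Product& product, int quantityRequested)
    {
        return quantityRequested > 0 && product.unitsInStock >= quantityRequested;
    }

    int main()
    {
        Product widget = { 25.0, 3 };
        std::cout << std::boolalpha
                  << isValidPrice(widget.priceInDollars) << std::endl   // true
                  << canOrder(widget, 5) << std::endl;                  // false: only 3 in stock
        return 0;
    }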

Using Both a Glossary and Domain Model

If you use both a glossary and a domain model to manage details, it is important to decide where to define a term and then do it in only one place. If the concept is related to other concepts in well-defined ways (e.g., orders have items, which refer to products, etc.), then use the domain model. If the concept is simply a standalone term, then use the glossary. Establish clear guidelines on which artifact to use and when, and then apply them consistently.

It is also important to keep in mind that neither the glossary nor the domain model is intended to be a design document; they both exist only to clarify the requirements and use cases, not to start defining the system's internal structures. It is tempting to say, "Well, since we are doing a domain model, let's define all the things in the problem domain so that we'll have a complete model." That is all very well and good, but it's probably not what you're getting paid to do (which is to deliver a specific system). If the glossary and domain model help to clarify the problem domain and make it easier to describe what the system should do, then they have done their duty. To ask for more is to ask for trouble.

Capturing Other Information

Some of the information we classify as "detail" does not lend itself to either a glossary or a domain model. We will discuss strategies for capturing this information below.

Complex Business Rules

As we saw above, a domain model provides an excellent way to capture simple business rules, especially those that constrain relationships between things or specify the validation of information. These simple rules can be captured as requirements and traced to the use cases to which they relate.

Other business rules, however, are requirements that constrain how the business itself works; they are independent of how the solution supports the business. These rules may require different responses. Here are some examples:

● A customer must be a member of the cooperative to make a purchase.

● A customer may have more than one outstanding order.

● A product may be sourced from more than one supplier; the product may have different prices depending upon the supplier.

● Customers whose bills go unpaid for more than 60 days will be referred to a collection agency.

The first three rules can be captured through relationships in the domain model. The last rule will need its own use case because it requires the system to make a decision and take some course of action.

Non-Functional Requirements

Use cases are great for representing functional system requirements: They describe how users interact with the system and what the system does in response. But many system requirements are non-functional: They state various conditions or constraints with which the system or its developers must comply. They may relate to general functionality, usability, reliability, performance, and supportability of the system. Teams often run into great difficulties when they try to apply use cases to handle these requirements.

There are several classes of these non-functional requirements:2

1. Non-functional requirements that apply to the system as a whole.

These may relate to overall quality and cost goals, or they may be general statements of characteristics that apply to the entire system. For example:

● The system shall be portable across Microsoft Windows and UNIX platforms (including Linux).

● The system shall have no more than 10 hours of scheduled downtime per year and must have no unscheduled downtime.

● The system shall have a mean time between failure (MTBF) of no less than 10,000 hours.

● The capitalized system development cost shall be no more than 10 percent per delivered unit.

These requirements cannot be traced to any particular use case and should simply be managed at the project level, even if they will have a significant impact on the system's architecture. One could trace these requirements to all use cases, but this would quickly become burdensome. In addition, satisfying these requirements requires a coordinated solution; if you were to assign them to individual use cases, then different teams assigned to different use cases might pursue different strategies for them.

2. Non-functional requirements that are constraints on the solution.

These requirements dictate that specific technologies be used, or that specific algorithms or approaches be employed. They typically ensure that the new system will be compatible with the technologies of existing systems, or that certain approaches are followed to produce consistent results. For example:

● All measurements reported by the system shall be provided in metric units, and all inputs to the system shall be made in metric units.

● The system shall utilize an X/Open XA-compliant transaction-processing monitor.

● The system shall access information using an ODBC-compliant database interface.

● The system shall report system management events using the SNMP standard.

In some instances, these constraints may apply only to a few use cases: reporting system management events or using a particular database interface standard, for example. If the requirement applies only to a specific use case or a small number of specific use cases, then trace it to the applicable use cases. Otherwise, leave it as a general requirement on the solution.

3. Non-functional requirements that relate only to a single or a few use cases.

Some non-functional requirements relate to specific use cases. These often convey additional qualities that the system must support when providing the functionality described in the use case. For example:

● Transaction Y must be completed in less than two seconds.

● The system must support a minimum of 200 concurrent instances of use case X.

These requirements should be traced directly to the use cases to which they apply.

As the use cases evolve, keep track of these links. When presenting a use case you will also need to present the non-functional requirements that apply to the use case. Typically, developers place these in a separate section of a use case report along with other special requirements.3


Special Requirements

Some supplementary requirements are hard to represent in the use case itself and yet are important to track to the use case. For example:

● Reliability requirements, such as "The system must be operational continuously, with less than four hours of scheduled down-time per year, and with no unscheduled down-time".

● Constraints on system design and implementation, such as "The system must conform to IEEE standard XXX," or "The system must be implemented in the Java programming language and must utilize J2EE," or "The system must run on a Microsoft Windows platform," or "The cost of the system must be no more than X per unit."

● Security requirements, such as "All access to system capabilities must be authorized."

These requirements do not belong in the use-case description because they really do not affect the flow of events,4 but it is very important that they not be forgotten when the use cases are designed and implemented.

These requirements can either be documented in a separate section in the use-case description as "Special Requirements," or simply traced to the use case (if you're using a requirements management tool).

Different Details, Different Strategies

We have seen that use-case descriptions should be detailed; without the details, the use case cannot describe what the system will really do and will not enable developers to understand it. It is no secret, however, that details can sometimes get in the way of understanding. The strategies described above provide a number of ways to manage these details in a way that promotes better understanding. To decide which ones are best for your project, carefully assess the details you need to include. Briefly:

● Use a glossary to define simple concepts that have limited relationship to other concepts.

● Use a domain model in conjunction with the glossary when the concepts are interrelated, to represent the structural relationships between them. Include simple business rules in the domain model.

● Place non-functional and special requirements (reliability, design and implementation constraints, security) in a separate "Special Requirements" section within the use-case description or trace them to the appropriate use case.

As with any art, the right approach will probably involve blending these strategies in proportions guided by experience.


Want more information and advice on creating better use-case descriptions? See "UML Activity Diagrams: Versatile Roadmaps for Understanding System Behavior" in this issue of The Rational Edge.

1 Jacobson et al., in The Object Advantage, use a business object model to capture the dynamics of the entire business process. In this context, the domain model is a subset of this business object model containing just the parts that identify key business entities and their relationships.

2 This is not intended to be an exhaustive taxonomy of the types of non-functional requirements; it's just a characterization to help explore some of the typical problems encountered when mapping non-functional requirements to use cases.

3 Note that the important thing is not what you call these "special" requirements, but that you keep track of which of them apply to which use cases.

4 One exception might seem to be the "security" requirement. If the requirement specifies a particular sequence of events for authorization, then the flow of events may be affected; but in many cases there is simply a requirement that security must be guaranteed, and it is up to the developers to ensure that the requirement is met. For these cases, making the requirement a "special requirement" of the use case is usually sufficient, since including the security-related behavior merely gets in the way of understanding what the system is really supposed to do. These kinds of requirements are often the most troublesome -- confusion over how to handle them often gets in the way of describing the real required behavior of the system.




Run-Time Debugging with Microsoft Visual Studio and Rational Purify

Part I: Debugging Win32 Applications

by Goran Begic, Technical Marketing Engineer, Development Solutions, Rational Software B.V., The Netherlands

When it comes to assessing how much progress we have made in improving the quality of software applications over the years, we certainly owe ourselves a pat on the back. Nevertheless, we should never lose sight of one incontrovertible fact: Software applications will always have bugs. After all, these programs are designed by humans, and we are certainly not free of bugs ourselves.

So debugging -- the extremely slow and expensive process of fixing defects -- will always be part of the larger software development process. In a wider sense, however, debugging also includes various programming techniques that enable developers to anticipate potential weak spots in the developed application. As the development process has advanced, debugging has become much more complicated. In fact, to examine the whole spectrum of debugging activity today, we actually need to look at the entire development process.

And, as any "debugger" will tell you, locating the real cause of a defect is the hardest task of all; fixing a problem in the code is by far the easiest part of debugging. In this two-part series, we will discuss tools and methods that can help with both parts of the process. This first article will introduce you to the Microsoft Visual Studio Programming Environment and discuss ways to use the Microsoft Visual Studio Compiler for initial debugging. The next article, which will appear in the May issue of The Rational Edge, will cover run-time debugging with the Microsoft Visual Studio Debugger and Rational Purify.

Many Species of Bugs

What is a bug, exactly? If your application crashes on every machine it's installed on, then you know you have one. But what if your tests looked great, you proudly released your application, and suddenly, several important customers reported an embarrassing "feature" that can be reproduced only on some machines in certain configurations? Yes, you'd have to call that a bug as well.

In fact, there are many different kinds of problems that we call bugs. Data corruption is at the top of the list, but an application can also perform poorly because of design flaws or even a confusing user interface. Have you tried using the 'CTRL+F' keyboard shortcut in Microsoft Outlook, for example? Instead of bringing up the 'Advanced Find' window you might expect, this shortcut forwards the currently selected message.

Debugging Methods and Techniques

One of the current watchwords for software quality improvement is "defensive programming." This is an approach that comprises a number of techniques and strategies for writing code that can help with early error detection. For example, writing clean code, with notation that is understandable and adopted by all developers working on the same program, can help cut down on bugs. Additionally, defensive programming encompasses the extensive use of macros and functions provided by the programming language that check important conditions during program execution.
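
To make this concrete, here is a minimal sketch in C++ of what such defensive checks can look like, using the standard assert macro; the function, buffer size, and names are hypothetical examples rather than code from this article or from any Rational product.

    #include <cassert>
    #include <cstddef>
    #include <cstring>

    // Hypothetical helper: copy a user name into a fixed-size buffer.
    // The assertions document and enforce the caller's obligations; in a
    // Release build (where NDEBUG is defined) they compile away to nothing.
    void copyUserName(char* destination, std::size_t destinationSize, const char* source)
    {
        assert(destination != 0);      // the caller must supply a buffer...
        assert(source != 0);           // ...and a source string...
        assert(destinationSize > 0);   // ...of non-zero size

        // Defensive truncation: never write past the end of the buffer,
        // even when the assertions are disabled.
        std::strncpy(destination, source, destinationSize - 1);
        destination[destinationSize - 1] = '\0';
    }

    int main()
    {
        char buffer[16];
        copyUserName(buffer, sizeof(buffer), "Alexandra Livingston");  // safely truncated
        return 0;
    }

The point is not the string copying itself but the habit: every assumption the code relies on is stated explicitly and checked as early as possible.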

Debugging should also be done during the process of measuring software quality. Having firm criteria for software quality, moreover, enables project managers to decide on future development targets for the system.

In order to avoid unexpected delays caused by bugs in the code, debugging should start as early in the development cycle as possible. Ideally, it would begin with project planning -- in setting up requirements for the first prototype build, for example. Putting too many features in the early prototype increases the probability of having defects in the code, and this can lead to numerous engineering days lost to fixing them. It takes much more time to debug feature-rich software than to add features to a fairly simple, bug-free application.

You can also help increase software quality by maintaining a consistent programming environment across all project teams, documenting all changes in the code -- using a source control tool -- and conducting regular smoke tests of the modified program. If all changes in the source of the developed program are precisely documented, then new bugs are most likely to appear in the most recent code changes. Focusing on these changes can substantially decrease the time it takes to find the real cause of an error.

As we mentioned earlier, however, no matter how carefully or defensively you code, and even if your program seems to be running correctly, there is always a possibility that the program contains insidious bugs. Software development tools are built with this assumption in mind and contain features that can help in the constant struggle against defects.


What are the basic tools a developer-warrior needs? There are four:

● An editor

● A compiler

● A debugger

● An automated run-time debugger

Without these tools, you cannot develop a successful application. The automated run-time debugger is a fairly recent addition to the list of essentials; its main task is locating errors during program execution, a task that may be very difficult to achieve manually -- even for software development gurus.

The Visual Studio Programming Environment

In this article we will take a close look at run-time debugging of a piece of code built with the Microsoft Visual C++ compiler. Of course, there are several other commercial compilers and also some free compilers available, but I chose this one because it is most probably the one that you have installed on your computer. It definitely does not produce the fastest code, it is not bug free, and it is not the most ANSI C++ compliant compiler, but it is widely used for C++ Win32 development.

Let's begin by explaining some of the terms and tools we will be discussing. Although they are well known to Visual C++ developers, they may be unfamiliar to people from a UNIX or Java world.

Visual Studio Integrated Development Environment (IDE). Most applications get developed through this environment, which consists of the Visual Studio editor and numerous menus and toolbars for invoking additional programming tools: the Microsoft Visual C++ compiler and linker, Visual Studio Debugger, etc. It is also possible to use other editors to write code and invoke the Visual C++ compiler and linker from the command line.

VC++ Project. A Visual C++ project is a folder in the Project Workspace folder that contains all the files used to build the application for that project. It is created automatically when you choose the type of application you will be developing. You can also create an empty project and write the program from scratch. When you build the application, it will be created in the project subdirectory by default. There are two default options for the type of binary that is going to be created: "Debug" and "Release." The "Debug" binary contains additional information that helps in debugging the program, and the "Release" version is basically the one that will be shipped to the customer. It is optimized for size or for speed, and it does not reveal any information about the source of the application to the final user. The project definition is saved in a file with the extension .dsp, and the project workspace information is in a file with the extension .dsw.

Debugger. A debugger is a tool that controls the program execution in a way that enables the user to step through every instruction of the application and examine all the variables used by the program, every memory allocation, and the content of the processor's registers. Since even a fairly simple application can contain thousands of machine instructions, stepping through each of them would require understanding the machine code, and it would take forever to examine the application. The debugger (in our case Visual Studio Debugger) can be controlled by setting up breakpoints on the lines of source code; these are places where the user can thoroughly examine the running application.

Visual Studio Debugger Windows. When Visual Studio Debugger starts running a developed application it opens some default windows; it also gives you an option to open additional windows that can be helpful in examining the application. Figure 1 shows a screenshot from Visual Studio Debugger with windows I typically open.

Figure 1: Useful Windows for Visual Studio Debugger

The main window in the Visual Studio Debugger GUI (Graphical User Interface) is the Standard Editor Window. An arrow marker on the left border of the window marks the position in the source where the execution of the application is stopped. If a source file for the machine instruction where the application is stopped is not available, then the debugger will show the Disassembly Window with the raw machine code.

On the right side of the Visual Studio Debugger GUI, you can see two windows called Registers Window and Call Stack Window.

● The Registers Window monitors the content of the CPU registers together with the names of the available registers.


● The Call Stack Window displays the list of function calls as they have been executed. Double-clicking on a function in the Call Stack Window will automatically open the source for the chosen function if the source code is available, or it will open the Disassembly Window with the corresponding machine instruction.

The window beneath the Call Stack Window is the Memory Window. The Memory Window displays the contents of virtual memory for a specified address. You can enter the address of particular interest in the Address box at the top of the Memory Window. It is possible to drag and drop addresses and variables from the source code or from other debug windows.

The last two windows in the Visual Studio GUI are the Variables Window and the Watch Window.

● The Variables Window consists of three tabs: "Auto," which displays the variables in the current statement and the previous statement; "Locals," which displays the variables local to the current function; and the "this" tab, which displays the object pointed to by the "this" pointer.

● The Watch Window monitors variables and expressions while the program is undergoing debugging. The expressions can be entered into it directly, or they can be dragged and dropped from either the source or other debug windows.

Using a Compiler for Debugging

A compiler is a program that converts source code into machine language. The compiler's functionality does not stop there, however. It can detect and report various errors in the compilation process as well as some of the potential problems related to statically allocated memory. You can save yourself a lot of headaches by choosing proper project settings, for example by paying attention to compiler warning levels. Almost every compiler will make you aware of syntax errors, although having code with the correct syntax does not mean that the code is bug free. Let's look at an example:

pLeftEdge = new char(strlen(pLeft) + 1);
strcpy(pLeftEdge, pLeft);
pLeftEdge = new char(strlen(pRight) + 1);
strcpy(pRightEdge, pRight);

If we don't check the parameters of strcpy() before we use it -- and if we use our editor's copy/paste functions a lot -- this is the kind of bug that is likely to plague us. A compiler will not complain about this code because the syntax is correct. Running a program with this code will quickly reveal the problem, however, because a crash is very likely: We are using an uninitialized string as a parameter for strcpy(pRightEdge, pRight). I will return to this example later when we try to debug an application with a similar problem.
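For reference, a corrected sketch of the fragment above (assuming the intent was to duplicate both strings into separate buffers) allocates each buffer with array new and assigns the second one to pRightEdge rather than reusing pLeftEdge:

pLeftEdge = new char[strlen(pLeft) + 1];    // array new, so the whole buffer is allocated
strcpy(pLeftEdge, pLeft);
pRightEdge = new char[strlen(pRight) + 1];  // assign to pRightEdge, not pLeftEdge
strcpy(pRightEdge, pRight);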


A compiler can be very helpful in many other situations, however. Let's take a closer look at run-time debugging for a piece of code built with the Microsoft Visual C++ compiler.

Setting Up Visual C++ for Debugging Projects

The Visual C++ compiler offers you two default groups of settings: "Release" and "Debug." There is no special compiler switch to set either of them, and the compiler will not complain if you decide to change the two default sets of options; the modified sets will still be called "Release" and "Debug." So, one might ask, what is the difference between "Debug" and "Release"? The "Debug" settings contain options that help in debugging the application, whereas the "Release" set of options improves the application's performance. Since many developers prefer to test a version of the application that's very close to the final release version, I will provide information below that is useful for debugging a release version and explain some of the differences between "Release" and "Debug."

Symbolic Debugging Information

The first main difference between "Release" and "Debug" is that "Debug" settings create symbolic debugging information by default. If you'd like to debug the release build of your program, make sure to include compiler and linker settings for creating symbolic debugging information. Debugging symbols are placed in a separate file, or section, of the executable with information that connects assembly level instructions to the source code. Without this, the debugging can be done only on the assembly level; and although there are probably still people who can read assembly, it is certainly not the easiest way to debug.

There are several different types of symbolic debugging information. The default type for the Microsoft compiler is the so-called PDB file. The compiler setting for creating this file is /Zi, or /ZI (which creates a PDB file with additional information that enables a feature called "Edit and Continue").

A PDB file is a separate file, placed by default in the "Debug" project subdirectory, which has the same name as the executable file with the extension '.pdb'. Please note that the Visual C++ 6 compiler by default creates an additional PDB file called VC60.pdb. This file is created during the compilation of the source code, when the compiler is not aware of the final name of the executable. The linker can merge this temporary PDB file into the main one if you tell it to, but it will not do it by default.

The default linker option in Visual Studio is /PDBTYPE:SEPT for separate PDB files. You should change it to /PDBTYPE:CON in both "Debug" and "Release" builds. The string "CON" most probably stands for "consolidated." If you are changing the project settings through the Visual Studio GUI, it is enough to uncheck the "separate files" option for the creation of the PDB file in the linker settings, section DEBUG. I suppose that there are minor performance advantages in building the program if you use separate PDB types, but this performance gain can be completely ignored on today's machines. It is important to have the debugging symbols in one file in case you would like to test your program on another machine -- and especially if you use other tools that need symbolic debugging information to test the application. To be complete, the PDB file must match the executable file; it must be located in the same directory and created without the separate-files linker option mentioned above.
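As a rough command-line sketch (the switches below are from the Visual C++ 6 toolset as I recall them; check your compiler documentation before relying on them), a build that produces full symbolic debugging information in a single, consolidated PDB file might look like this:

cl /c /Zi myapp.cpp
link /DEBUG /PDBTYPE:CON /OUT:myapp.exe myapp.obj

The result is a myapp.pdb file next to myapp.exe that the debugger and other tools can pick up along with the executable.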

Other debugging symbol types are "C7 Compatible" (compiler setting /Z7) and "Line numbers only" (compiler setting /Zd). The C7 Compatible type of symbolic debugging information will be part of the executable file, and it will contain line numbers. Those who remember DOS often still refer to this as "CodeView." The "Line numbers only" type does not include full symbolic debugging information.

Relocation Information

Relocation information, or "relocs," is a table of base relocations saved in the executable binary. This information is used for all the addresses that need to be patched when the executable module is loaded into memory at a base address other than the preferred base address set by the linker. If the executable module is loaded at the preferred base address, then the relocation information is ignored. Run-time debugging tools like Rational Purify use this information for module "instrumentation," since Purify often needs to rebase the instrumented module in memory if the original location is not free to be used. The project setting that forces the creation of the reloc section of the module is the linker setting /FIXED:NO. This is not a default setting for the release version, and it needs to be added manually to the list of options in the linker settings dialog box.

Optimization

The purpose of optimization is to create the smallest, or the fastest, object code possible. To achieve these goals, the compiler performs various changes to the assembly code. For example, it eliminates dead code, removes redundant expressions, optimizes loops, and inlines functions.

Optimization should be turned off for debugging purposes in both the "Debug" and "Release" versions of the executable. Optimized assembly code makes it difficult for the debugger to determine the line of code that corresponds to a given assembly instruction. That means it is much more difficult to set breakpoints when debugging an optimized application than when debugging a non-optimized version of the executable.

Linking Against the "Debug" Run-time Library

The default project settings for the "Debug" build will link a program against the "Debug" version of the C Run-Time Library: "Debug C Run-Time." This can be very helpful for catching bugs in your code because it uses the "Debug" versions of the dynamic memory allocators, which, for example, add bit patterns to mark allocated memory. Additionally, the "Debug" memory allocators mark freed memory to help detect free memory reads and writes. They also allocate so-called "guard bytes" on the boundaries of each newly allocated chunk of memory to check for boundary violation errors.


The "Debug" versions of memory allocators are malloc_dbg and heap_alloc_dbg. All the calls to malloc() and new() will end up calling the "Debug" heap functions. The deallocation functions free() and delete() will call debug() heap deallocation function free_dbg.

The patterns used to mark the allocated memory are as follows: The uninitialized memory is marked with bytes 0xCD; the boundary zone between allocated structures is marked with bytes 0xFD; and the freed memory is marked as 0xDD.
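A tiny sketch makes these patterns visible. Assuming a "Debug" build linked against the Debug C Run-Time Library, you can watch the bytes change in the debugger's Memory Window:

void show_debug_heap_patterns()
{
    char *block = new char[16];   // all 16 bytes now read 0xCD (uninitialized)
                                  // the bytes just outside the block are guard bytes (0xFD)
    delete [] block;              // the freed bytes are overwritten with 0xDD
}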

Using the Debug Run-Time Library in the "Release" build would almost turn it into the "Debug" build; moreover, if you have one of the commercial run-time debugging tools, doing so is unnecessary. It may be more important to link your executable against the dynamically linked version of the Run-time Library in the "Release" build (compiler option /MD).

Warning Levels

The default warning level for Visual Studio projects (both "Debug" and "Release") is Level 3 (compiler option /W3). This warning level will, for example, report function calls that precede function prototypes. For debugging purposes, however, increasing the warning level to /W4 can help in detecting the usage of uninitialized local variables, as well as the usage of initialized local variables that were not referenced. This is a very useful but lesser-known feature of the Visual C++ compiler. That is partially because the warning level /W4 creates a lot of bogus warning messages that make it difficult to focus on real potential problems. However, if you treat all warnings as errors, then the code you are writing will certainly have many fewer bugs than you could achieve with the default warning level. Another useful feature of the /W4 setting is that it prevents assertion errors.
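Here is a short sketch of the kind of code that compiles silently at the default level but draws warnings at /W4 (the exact warning numbers vary, so I omit them):

int scale_reading(int raw)
{
    int offset = 10;      // initialized but never referenced: flagged at /W4
    int result;
    if (raw > 0)
        result = raw * 2;
    return result;        // potentially uninitialized when raw <= 0: flagged at /W4
}

Compiling with cl /W4 /WX treats these warnings as errors, which is an easy way to enforce the "treat all warnings as errors" policy mentioned above.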

Catching "Release" Build Errors in the "Debug" Build

The Microsoft Visual C++ 6 compiler option /GZ initializes all local variables with the value 0xCCCCCCCC and also checks the call stack after every function call. These very valuable features are enabled for the "Debug" build by default, but they need to be turned on manually for the "Release" build.

Run-Time Debugging: It's All About Memory

When you decide to execute your application, it is loaded into memory first. Actually, since Windows uses memory mapping extensively, only the parts of the executable that are needed will be paged into memory at any given time. The PE (Portable Executable) formatted files, as used on Windows, structurally look very much like their images in memory.

Stack and Heap

When a process starts (when you start the main executable of your application, for example), pages of memory are used to store all static and dynamic data for the program. Every process owns at least two memory regions:

● Stack, or static data blocks, is the memory area where automatic variables are allocated. On Win32, every thread gets its own designated stack region. The size of the stack for the main program thread is defined during compilation, and by default it is set to 1 MB. The default stack size can be changed with the linker option /STACK:reserve[,commit]. This value can be overridden by using the STACKSIZE statement in a module definition file (.DEF), or by changing it in the binary itself with the EDITBIN.EXE tool (Microsoft COFF Binary File Editor).

● Heap, or free store, consists of independent regions of virtual memory, identified by handles and limited only by available virtual memory. Dynamic structures and handles on the heap are allocated at run time.

Every process has at least one default heap, but processes can also have many other dynamic heaps. Blocks of memory, identified by pointers, are sub-allocated from the heap at run time and are managed by the heap APIs. The memory used by the default heap is private to the process that created it and cannot be shared with other processes. The default heap's initial reserved and committed memory region size is defined during linking. The size of the default heap (1 MB by default) can be changed with the linker option /HEAP:reserve[,commit]. This value can be changed in an already linked binary with the EDITBIN utility, just as in the case of changing the default stack size.
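As a small sketch (the option syntax follows the Microsoft linker and EDITBIN documentation; the sizes here are arbitrary), the reserved stack and heap sizes can be set at link time or patched into an already linked binary:

link /STACK:0x200000 /HEAP:0x200000 /OUT:myapp.exe myapp.obj
editbin /STACK:0x200000 myapp.exe
editbin /HEAP:0x200000 myapp.exe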

There are three different types of memory allocation APIs on the heap:

● GlobalAlloc/GlobalFree and LocalAlloc/LocalFree for all memory allocations in the default process heap.

● The COM IMalloc allocator (CoTaskMemAlloc/CoTaskMemFree) for memory allocation in the default process heap.

● C Run-time memory allocation APIs -- the new()/delete() operators and the malloc()/free() functions -- for memory allocation on the private heap created by the C Run-time.

VirtualAlloc() and VirtualFree(), however, are Win32 APIs for the direct allocation of pages in virtual memory. VirtualAlloc() and VirtualFree() can be called directly from a Win32 application, but the extensive use of these APIs is inefficient unless you want to allocate large chunks of memory at once.
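The following sketch contrasts these allocation families side by side; it is illustration only, not a recommendation to mix them in one routine (the COM calls require linking against ole32.lib):

#include <windows.h>
#include <objbase.h>
#include <cstdlib>

void allocation_samples()
{
    // Win32 allocators for the default process heap
    HGLOBAL hMem = GlobalAlloc(GMEM_FIXED, 64);
    GlobalFree(hMem);

    // COM task allocator, also backed by the default process heap
    void *pTask = CoTaskMemAlloc(64);
    CoTaskMemFree(pTask);

    // C Run-time allocators, on the private heap created by the CRT
    void *pCrt = malloc(64);
    free(pCrt);

    // Direct allocation of pages in virtual memory
    void *pPages = VirtualAlloc(0, 4096, MEM_COMMIT, PAGE_READWRITE);
    VirtualFree(pPages, 0, MEM_RELEASE);
}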

This is where the hardest part of debugging begins -- in controlling and debugging dynamically allocated structures. The first step, however, is obvious: You need to run the application. We will discuss this step in detail in next month's issue of The Rational Edge.


Analysis and Design with the Universal Design Pattern

by Koni Buhrer
Software Engineering Specialist
Rational Software

Developing large software systems is notoriously difficult and unpredictable. Software projects are often canceled, finish late and over budget, or yield low-quality results -- setting software development apart from established engineering disciplines. As others have observed, the key to successful software development is a good, resilient architecture. The Universal Design Pattern (which I discussed in the December 2000, January 2001, and February 2001 issues of The Rational Edge) is a set of design rules that can help software developers create quality architectures in a systematic fashion.

In this fourth article, I will outline how the rules of the Universal Design Pattern should be applied for software analysis and design. The Universal Design Pattern fits very nicely into the framework of the Rational Unified Process (RUP®): A software architect can easily apply the Universal Design Pattern when deriving the initial architectural design of a software system from a use-case model.

Let me start by summarizing the key features of the Universal Design Pattern, a design method based on an abstract set of design elements and narrative rules. The Universal Design Pattern is truly universal because it applies to software of all application domains. Software developers can use the Universal Design Pattern no matter what design language, modeling tools, or programming languages they employ.

The Universal Design Pattern features four types of design elements: Data Entities, I/O Servers, Transformation Servers, and Data Flow Managers. Each type of design element is concerned with exactly one specific aspect of system operation:


● Data entities represent the input data, output data, and internal persistent data of the software system. Data entities are solely concerned with data structures and primitive operations on those data structures. A data entity is an entirely passive object -- pure data -- oblivious to system algorithms, external interfaces, or sequence of actions.

● I/O servers encapsulate the external (hardware) interfaces and internal databases the software system interacts with. I/O servers are solely concerned with input/output and servicing external (hardware) interfaces. An I/O server may be concerned with I/O timing, but not with scheduling or triggering other system actions. I/O servers do not perform any algorithmic tasks with respect to the input or output data they handle.

● Transformation servers perform the transformation from input data to output data -- possibly updating internal state data. Transformation servers are solely concerned with system algorithms. A transformation server never has to worry about where data is coming from or where it is going. All operations of a transformation server are sequential and deterministic. Transformation servers are not concerned with data representation, external interfaces, or sequence of actions.

● Data flow managers obtain input data from the I/O servers, invoke the transformation servers (which transform input data into output data), and deliver output data to the I/O servers. Data flow managers are concerned solely with data flow and sequence of actions. Data flow managers know what actions the system needs to perform and in what sequence, but they are not concerned with the details of those actions: the algorithms and external (hardware) interfaces. While data flow managers own the internal state data, they are not concerned with its representation.

See Figure 1 for an example of a very simple architectural design.


Figure 1: A Simple Architectural Design

Translating Use-Case Models

Use-case modeling is one of the most compelling and successful techniques for finding and capturing functional software requirements. The requirements workflow of the Rational Unified Process describes the use-case modeling technique at length. As a brief summary, we can say that use cases specify the externally visible behavior of a software system: how a system interacts with external entities (called actors). Figure 2 is an example of a simple use case as it might appear in a use-case model.


Figure 2: A Simple Use Case

A use case is realized by three kinds of analysis classes: boundary classes, control classes, and entity classes. The Analysis & Design Workflow of the Rational Unified Process explains at length how to find analysis classes. We can summarize by saying that actor/system interactions give rise to boundary classes, the use-case behavior gives rise to a control class, and stored information gives rise to entity classes. The use case depicted in Figure 2, for example, might be realized by boundary classes, control classes, and entity classes, as shown in Figure 3.

Figure 3: A Simple Analysis Class Diagram

As we will see below, there is a simple correspondence between the analysis classes/objects and the modeling elements of the Universal Design Pattern. A software architect can therefore easily refine the analysis model and create an architectural design model according to the rules of the Universal Design Pattern.

Of course, requirements analysis with use cases is not a prerequisite for applying the Universal Design Pattern. There are many ways a software developer can discover data entities, I/O servers, transformation servers, and data flow managers when studying the software requirements. Often, considering external interfaces and the data transformations that are needed to produce output data from the input data will lead a developer to create a reasonable architectural design that obeys the rules of the Universal Design Pattern.

Boundary Classes

A boundary class embodies several aspects of the interface with which it is associated, including the physical I/O device, the available input and output operations, the data the system produces or retrieves for processing, and the layout of the information presented to, or requested from, the actor. The Universal Design Pattern requires that an architect separate these aspects in the design model.

Each boundary class translates to a set of design elements:

● An I/O Server. The I/O server encapsulates the driver that controls the interface hardware. Often the driver does not interact with the hardware directly (though it can), but with lower level I/O servers or operating system drivers. The I/O server also provides the abstract input and output operations other design elements of the system use. Finally, the I/O server encapsulates the layout of the external interface and its components.

● Many input data entities and output data entities. The data entities embody the logical structure and operations of the data objects other design elements exchange with the I/O server. The input and output data entities are the parameters of the input and output operations of the I/O server. Note that the structure of the input and output data entities is independent of the layout of the external interface.

Upon refining the design of the I/O server, a software developer will often discover other data entities and subordinate I/O servers that are needed for its implementation. Also, a single I/O server often implements many or all boundary classes that are associated with the same external interface. See Figure 4 for an example of a design model with an I/O server derived from the RadarReceiverInterface and the RadarTransmitterInterface boundary classes.

Figure 4: Refined Class Diagram of RadarReceiverInterface
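In C++, a rough sketch of this translation might read as follows; the class and operation names echo the radar example, but the exact signatures are my own assumption:

// Input and output data entities: pure data, independent of the
// telegram layout on the external interface.
struct Telegram20ms    { /* target position fields */ };
struct ControlTelegram { /* transmitter control fields */ };

// I/O server encapsulating the radar receiver and transmitter hardware.
class RadarIoServer
{
public:
    void wait_for_20ms_telegram(Telegram20ms &output);        // blocks until a telegram arrives
    void put_control_telegram(const ControlTelegram &input);
private:
    // driver details, local buffers, and the server's own thread of control
};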

Control Classes

A control class represents the behavior of a use case, the sequence of events the software system responds to, and the sequence of actions the system performs while the use case is executed. If there are several use cases that describe communication with the same actor or through the same external interface, then the control classes of all these use cases can often be translated into a single modeling element: A Data Flow Manager.

Note, however, that a data flow manager never responds to an event; a data flow manager always acts. The flow of events of a use case is usually described in reactive terms. The software developer has to translate that description into a sequence of pure actions performed by the data flow manager. Consider, for example, the following use-case outline, which is realized by the RadarLoopControl class:

The UpdateTrackerPosition use case is executed every 20 milliseconds.

1. The use case starts when the radar receiver sends a telegram with target position data to the system.

2. The system retrieves the filter state and computes the new tracker position. The system then updates the filter state.

3. The system sends the new tracker position to the tracker motor.

In skeletal Ada, the corresponding data flow manager would look like this:

TASK Tracking_Loop_Manager IS
   State : Filter_State;
BEGIN
   LOOP
      Radar_Io_Server.wait_for_20ms_telegram(20ms_telegram);
      Filter.compute_new_position
         (State, 20ms_telegram, new_tracker_position);
      Tracker_Io_Server.set_position(new_tracker_position);
   END LOOP;
END;

Note how the data flow manager is actively waiting for and retrieving the input data provided by the I/O server (the Radar_Io_Server in this example). The I/O server does not invoke the data flow manager or send data to the data flow manager.

Entity Classes

Entity classes represent data the system stores across several executions of a use case. An entity class embodies several aspects of the data, namely its structure, its operations, and the means by which it is stored persistently in the system. The Universal Design Pattern requires that an architect separate these aspects in the design model. Furthermore, an entity class can translate to different sets of modeling elements, depending on the significance of the data the entity class represents. We mainly have to consider two cases:

1) If the entity class represents persistent data the system stores in an internal database, then the entity class translates to two modeling elements:

● A persistent data entity. The persistent data entity embodies the logical structure of the stored data objects. A persistent data entity typically has few operations. In particular, it should not have any operations that pertain to the means by which it is stored.

● A database I/O server. This I/O server embodies the database object that stores the data objects. The operations of the database I/O server are typically of the get and put type and take an in or out parameter of the persistent data entity above. The database I/O server encapsulates any internal structures (e.g., index tables) and functionality needed to store the data objects in the database.

See Figure 5 for an example of a design model with a database I/O server and a persistent data entity derived from the TacticalData entity class.

Figure 5: Refined Class Diagram of TacticalData

If the tactical data resides on some kind of external medium, then it obviously has to be represented by a database I/O server, because it is accessed through an external (hardware) interface. But even if the tactical data is kept in memory, it must be represented by a database I/O server to separate the data structure aspects from the storage/retrieval aspects.
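A rough C++ sketch of this split (the names follow the TacticalData example; the operation signatures are assumptions) keeps the entity free of storage concerns:

// Persistent data entity: logical structure and primitive operations only.
struct TacticalData
{
    // domain fields ...
};

// Database I/O server: owns the storage mechanism and exposes get/put
// operations that take the persistent data entity as a parameter.
class TacticalDatabaseIoServer
{
public:
    void put(const TacticalData &record);
    void get(TacticalData &record);
private:
    // index tables, file handles, or in-memory buffers
};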

2) If the entity class represents internal state data associated with a system algorithm, then the entity class translates into a Transformation Server. The transformation server embodies both the logical structure of the state data and the algorithm. The transformation server has a compute type operation that takes an input data entity and an output data entity as parameters. Note that in non-object-oriented programming languages, the internal state data object would also be a parameter that is passed in and out.


See Figure 6 for an example of a design model with a transformation server derived from the FilterState entity class.

Figure 6: Refined Class Diagram of FilterState

An internal state data object is always owned by a single data flow manager. Although different data flow managers may use the same transformation server (which is a class), they maintain independent state data objects (which are instances of the transformation server class).

The Four Realms of Software

The Universal Design Pattern divides a system's software into four separate realms at the highest level of design:

● The realm of data entities (representing the problem domain)

● The realm of transformation servers (representing the domain-dependent solution space)

● The realm of I/O servers (representing the target environment)

● The realm of data flow managers (representing the target-dependent solution space)

These four realms are quite distinct from one another, and the classes/objects of each realm are likely to be modeled and implemented in different ways. Note that data entities and transformation servers usually constitute the bulk of a software system.

The Realm of Data Entities

The realm of data entities represents the problem domain. Most objects or concepts within the problem space are represented by data entities. Data entities are largely oblivious to the target environment of the software system. Together with transformation servers, data entities model the problem domain reality.


Each data entity is a class of data objects, and its primitive operations are class operations. Data entities are often related to other data entities; for example:

● Data entities can be composed of other, more fundamental data entities.

● Data entities can be derived from more fundamental ancestor data entities.

● Primitive operations can be inherited from ancestor data entities.

Data entities represent all the input data, output data, and persistent data a software system handles. Any number of data objects (instances of data entities) may exist in a software system at any given time. Data objects come and go; they are created and destroyed continually as the software executes. Data entities are entirely passive. Although other types of design elements pass them around and invoke their operations, they cannot truly interact with data entities.

Data entities are best modeled with an object-oriented design language like the Unified Modeling Language (UML). UML is very powerful and provides a broad range of features; many of those features, however, are unsuitable for data entity design. A software developer should observe the following restrictions:

● Message sequence diagrams, object interaction diagrams, and state machines have no place in data entity design. Remember, data entities are entirely passive; they do not exchange messages and do not have state (in fact, they are state).

● Primitive operations must not invoke global methods or otherwise interact with global objects. A primitive operation of a data entity may invoke other operations of that data entity, operations of its component data entities, or operations of its parameters, but nothing else. In particular, a primitive operation must not invoke operations of any I/O servers or transformation servers.

● Data entities (the classes) must not have global, static state.
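A short C++ sketch of a data entity that respects these restrictions (the TrackerPosition fields and operations are illustrative assumptions):

// Pure data plus primitive operations: no global state, and no calls
// to I/O servers or transformation servers.
class TrackerPosition
{
public:
    explicit TrackerPosition(double azimuth = 0.0, double elevation = 0.0)
        : azimuth_(azimuth), elevation_(elevation) {}

    double azimuth() const   { return azimuth_; }
    double elevation() const { return elevation_; }

    // primitive operation involving only this entity and its parameter
    bool is_close_to(const TrackerPosition &other, double tolerance) const
    {
        double da = azimuth_ - other.azimuth_;
        double de = elevation_ - other.elevation_;
        return (da * da + de * de) <= tolerance * tolerance;
    }

private:
    double azimuth_;
    double elevation_;
};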

The detailed structure of a data entity or its primitive operations need not be exposed in the top-level design of a software system. The top-level data entities usually correspond to the input data and output data of the software system, as well as persistent data identified during requirements modeling. A software developer always introduces new data entities while refining the top-level data entities and breaking them into smaller objects. Data entities can only be composed of other data entities, though. No I/O servers, transformation servers, or data flow managers ever emerge during the detailed design of a data entity.

Note that it is a mistake to start data entity design by identifying domain concepts. Data entities representing domain concepts will emerge eventually, as the design model is refined, but they should never be the starting point of architectural design.


The Realm of Transformation Servers

The realm of transformation servers represents the domain-dependent solution space. Each transformation server represents a domain-specific algorithm pertinent to the problem a software system solves. Transformation servers are oblivious to the target environment. Together with the data entities, they model the problem domain reality.

Each transformation server implements an algorithm (sometimes several algorithms) that is sequential and deterministic. A transformation server is thus simply modeled as a class with operations implementing the algorithm. All the usual elements of procedural languages and structured design languages can be used to implement transformation server operations: loops, conditional statements, subroutines, parameters, recursion, etc.

You might think that transformation servers would functionally decompose a software system, but that is not the case. A transformation server is usually associated with a transformation state. The transformation state is internal data a transformation operation uses and updates whenever it is invoked. And the transformation state is the class with which a transformation operation is associated. For example, in C++:

class Filter {
public:
    void compute_next_position (20msTelegram &input,
                                TrackerPosition &output);
private:
    FilterState state;
};

In the procedural programming language C we would specify the transformation server class as follows:

struct FilterState {};

void compute_next_position (FilterState *pstate,
                            20msTelegram *pinput,
                            TrackerPosition *poutput);


Note that both code snippets specify a class! In the C example, the FilterState object is explicitly passed as an in-out parameter when the transformation server operation is invoked. In the C++ example, the FilterState object is passed implicitly as the object to which the compute_next_position operation belongs.

The following guidelines apply to transformation server design and implementation:

● A transformation server (the class) must not have any global, static state.

● All data objects a transformation server uses must be passed as parameters to its transformation operation -- either explicitly or implicitly.

● The algorithm implemented by a transformation server must depend only on its parameters.

● A transformation server may invoke the operations of other transformation servers or primitive operations of its parameters. But a transformation server must not invoke I/O server operations.

The restrictions above ensure that all transformation server operations are reentrant.

Any object-oriented or structured design language is adequate to model transformation servers. After all, a transformation server is nothing more than an algorithm; therefore, any design language able to describe algorithms will do.

The detailed structure of a transformation server does not need to be exposed in the top-level design. The algorithm of a transformation server can often be broken into discrete steps. Each of these steps can itself be represented as a subordinate transformation server. Some of the steps may already be implemented by an unrelated transformation server, and can be invoked. As the software developer refines the design of a transformation server, he will normally introduce new transformation servers and new data entities. The data entities emerge as a by-product, representing parameters the transformation server passes to subordinate transformation servers. I/O servers and data flow managers never emerge during the detailed design of a transformation server.

It's important to note that transformation servers and data entities are conceptually very different design elements. Although they are both modeled and implemented as classes, they have different responsibilities within a software system. Data entities model domain objects and concepts; transformation servers model algorithms pertinent to the software solution.

The Realm of I/O Servers

The realm of I/O servers represents the target environment. Each I/O server encapsulates an external (hardware) interface or internal database with which the software system interacts. I/O servers are concerned only with the target environment of the software system, not with problem domain objects or concepts. Together with the data flow managers, the I/O servers model the target environment reality.

I/O servers are best characterized as passive-reactive. They are passive until they either receive an external signal or a client invokes an input/output operation. Once activated, however, an I/O server autonomously performs all the necessary steps to either serve the external (hardware) interface it encapsulates or to satisfy the client request. Note that the only clients of an I/O server are data flow managers.

An I/O server can be fairly complex and expensive to implement, depending on the nature of the external interface it encapsulates. Imagine, for example, an I/O server representing a CORBA interface or implementing a TCP/IP stack. However, the complexity of an I/O server should be related solely to the input/output operations it performs and the external (hardware) interface it services. I/O servers should not be concerned with any computations related to the data they receive, transmit, or store.

The following guidelines apply to I/O server design and implementation:

● An I/O server may invoke the operations of another I/O server but not of a transformation server.

● An I/O server should use operations of input or output data entities only to create or destroy them.

● To access the external interface, an I/O server should use its own thread of control.

● When a client delivers output data to an I/O server, the data should always be copied into a local buffer.

● When a client obtains input data from an I/O server, the data should always be copied out of a local buffer.

An I/O server can be modeled as a class, although I/O servers are usually solitary objects. The input and output operations an I/O server provides often depend only on the target environment, not on the problem domain. I/O servers typically have get, put, and wait type operations. Here is an example in Ada:


PACKAGE Radar_Io_Server IS
   PROCEDURE wait_for_20ms_telegram (output : OUT Telegram20ms);
   PROCEDURE put_control_telegram (input : IN ControlTelegram);
END;

No design language is entirely adequate for modeling an I/O server. Skeletal, high-level language code is therefore often the best choice. The capsules of UML/RT capture the passive-reactive nature of I/O servers very well. However, capsule-interaction by signals and messages is inadequate to describe some of the interactions between I/O servers and data flow managers -- especially waiting for input data.

The detailed structure of an I/O server does not need to be exposed in the top-level design. Higher-level I/O servers often rely on subordinate, lower-level I/O servers to perform the required input/output operations. Sometimes a lower-level external interface is already encapsulated by an I/O server, and a higher-level I/O server can invoke operations of the latter. As the software developer refines the design of an I/O server, he will normally introduce new I/O servers and new data entities. Data entities emerge as a by-product, representing parameters the I/O server passes to subordinate I/O servers. Transformation servers and data flow managers never emerge during the detailed design of an I/O server.

The Realm of Data Flow Managers

The realm of data flow managers represents the target-dependent solution space. Each data flow manager represents a sequence of actions the system performs. Data flow managers are largely oblivious to the problem domain -- domain concepts and domain-specific algorithms. Together with the I/O servers the data flow managers model the target environment reality.

The data flow managers are the active elements of a system. Each data flow manager implements an independent thread of control. In a system that is implemented in Ada, the data flow managers would most likely be Ada tasks. Data flow managers are the means by which a software developer implements concurrency within a software system. All data flow managers of a software system perform their actions concurrently.

The following guidelines apply to data flow manager design and implementation:

● A data flow manager always acts on I/O servers and transformation servers; it never reacts.

● Data flow managers have no interface. It is therefore not possible for other design elements to invoke a data flow manager or to send messages to a data flow manager.

● A data flow manager performs its actions as quickly as possible; data flow managers are not controlled by a clock. When a data flow manager retrieves input data from an I/O server, it may have to wait for the input data to become available. But because waiting for input is considered to be an action, a data flow manager still performs its actions as quickly as possible.

● Data flow managers may explicitly synchronize with each other, though explicit synchronization should be avoided if at all possible. Explicit synchronization is considered to be an action.

● A data flow manager does not have to perform all its actions sequentially. It can perform some actions conditionally or repeatedly.

● A data flow manager should never be concerned with the details of the actions it performs. A data flow manager knows what the software does and in what sequence, but not how it is being done.

● The actions of a data flow manager must be simple, such as invoking operations of an I/O server or a transformation server. Algorithms or input/output details should be implemented not in the data flow manager itself but in the operation it invokes.

● A data flow manager may temporarily store data entities. It must not, however, use any of the operations of those data entities.

In skeletal Ada, for example, a data flow manager might look like this:

TASK Tracking_Loop_Manager IS
   State : Filter_State;
BEGIN
   LOOP
      Radar_Io_Server.wait_for_20ms_telegram(20ms_telegram);
      IF Filter.is_new_asm(State, 20ms_telegram) THEN
         Filter.start_asm_tracking(State);
         Console_Io_Server.sound_asm_alarm;
      END IF;
      Filter.compute_new_position
         (State, 20ms_telegram, new_tracker_position);
      Tracker_Io_Server.set_position(new_tracker_position);
   END LOOP;
END;

Data flow managers can be modeled with a variety of modeling languages, although -- just as for modeling I/O servers -- none is entirely adequate. A software developer can easily use UML's activity state diagrams or skeletal high-level language code to describe the sequence of actions a data flow manager performs. Note that a UML/RT capsule cannot represent a data flow manager because data flow managers are active, and not reactive, design elements.

All data flow managers of a software system must be present in the top-level design. A data flow manager is not composed of any subordinate objects and cannot be broken into subordinate design elements, other than the actions it performs. As far as data flow managers are concerned, the design of a software system is flat.

This forces a software architect to provide a detailed model of the operationally complicated interactions within a software system -- interactions between the data flow managers and other design elements -- at the highest level of design. The operationally simple bulk of the software system however -- I/O servers, transformation servers, and data entities -- does not need to be detailed at a high level. Although I/O servers may interact with other I/O servers and transformation servers may interact with other transformation servers, these interactions are simple because they are sequential and hierarchical.

Model Each Realm Separately

As we have seen, it is quite easy to obtain initial design elements from the use-case model of a software system. Each design element belongs to one, and only one, realm of software. It is either a data entity, a transformation server, an I/O server, or a data flow manager. A software developer can refine and relate those design elements according to the rules of the Universal Design Pattern (as provided above) to obtain a quality architectural design.

It is imperative, though, that a software developer not introduce design elements that belong to multiple realms! Similarly, the general rule that a design element can only be composed of design elements of the same type (data entities being a by-product of decomposition) is absolutely essential.

Note that this is the way we design systems in other industries. When designing a car, for example, we would never introduce arbitrary design elements -- say a front-end and a back-end -- and then discover later that the front-end contains a driver's seat, spark plugs, and tires. No; we would divide the elements of the car into separate realms at the highest level of design -- such as passenger comfort, power train, and rolling gear -- and then model the design elements of each realm separately.


Common Misconceptions about Software Architecture

by Philippe Kruchten
Rational Fellow
Rational Software Canada

References to architecture are everywhere: in every article, in every ad. And we take this word for granted. We all seem to understand what it means. But there isn't any well-accepted definition of software architecture. Are we all understanding the same thing? We gladly accept that software architecture is the design, the structure, or the infrastructure. Many ideas are floating around concerning why and how you design or acquire an architecture and who does it. In this article I review some of these accepted ideas and show why, in my opinion, they may be misconceptions.

"Architecture is design."

Yes, architecture is design. It is about making the difficult choices on how the system will be implemented. It is not just the "what."

But not all design is architecture. We see this word applied more and more frequently to any form and aspect of design. A few years ago, Mary Shaw pleaded: "Do not dilute the meaning of the term architecture by applying it to everything in sight." Unfortunately things have become worse, not better.

Architecture is one aspect of the design, focusing on the major elements -- the elements that are structurally important, but also those that have a more lasting impact on the performance, reliability, cost, and adaptability of the system. Architecting is choosing the small set of mechanisms, patterns, and styles that are going to permeate the rest of the design and give it its integrity. Architecture is the tool that allows us to master complexity. It cannot be the whole design. It has to limit itself to a certain level of abstraction but still be concrete enough to draw definite conclusions. It is not just "high-level design."

What shall the architect focus on, then? There is no universal answer. For any given project, a decision needs to be made about what is architecturally significant so that we can draw that thin and elusive line between architecture and the rest of the design activities.

"Architecture is infrastructure."

Yes, the infrastructure is an integral and important part of the architecture: It is the foundation. Choices of platform, operating systems, middleware, database, and so on, are major architectural choices.

But there is far more to architecture than just the infrastructure. The architects have to consider the whole system, including all applications; otherwise an overly narrow view of what architecture is may lead to a very nice infrastructure, but the wrong infrastructure for the problem at hand. Time and time again, I run into this in organizations in which an architecture team is working solely on infrastructure, largely ignorant of the problem domain and the application software -- which they consider to be outside of the architecture. "Oh, you mean the application. That's what the people in the other building do."

"Architecture is [insert favorite technology here]."

"The network is the architecture. The database is the architecture. The transaction server is the architecture. The GUI is the architecture. CORBA is the architecture. This standard is the architecture..." This is a special case of the previous point. Yes, many of these aspects are part of the architecture, but the architecture cannot be restricted to one aspect only.

Architecture is more than just a "technology watch," but I see many organizations in which the major role of the software architect seems to be experimenting with interesting new technologies. Often this is also the consequence of having architects who all come from one single specialty: for example, an architecture team comprised solely of data engineers. I was told recently: "We do not need anybody to work on architecture; our company has standardized on three-tier client-server architecture."

"Architecture is the work of a single architect."

"A great architecture is the work of a single architect. " Fred Brooks wrote this in 1975.1 Granted, there are some great examples of this: F. Brooks himself, G. Amdahl, or A. Kay of SmallTalk fame.2 But in practice, geniuses are rare, and in many organizations good architectures are most often the work of a small group of people working as a team.

The architecture team acts as a team as defined by Katzenbach and Smith3:

A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they hold themselves mutually accountable.

So the architecture team is not a committee, meeting every Tuesday at 15:00; it is not a problem clearinghouse; it is not an ivory tower, a resting place for successful but tired designers.

Especially for complex systems, a well-chosen architecture team is needed to bring the right mix of domain and software engineering experience, and the right mix of design specialties (hence minimizing the [my favorite architecture] effect mentioned above). The architecture team needs significant flexibility in composition and structure. Some of the architects may become the architects of subsystems, once those have been defined. However, it is important to have a clearly designated team leader: a lead software architect who can drive the effort, arbitrate, resolve conflicts, and bring timely closure to project tasks.

"Architecture is flat."

When I ask to see an architectural description, I often notice that people have tried very hard to make it flat -- two- or even one-dimensional. They attempted to convey many different aspects of architecture using only one kind of concept, one family of components and connectors. They end up trying to show the many complex concerns of multiple stakeholders using one kind of diagram or blueprint. The description, however, is incomplete. A flat language cannot capture or describe the intricacies of a complex system. And I have the same concerns about some architectural description languages.

Architecture is a complex beast; it is many things to many different stakeholders. Using a single blueprint to represent architecture results in an unintelligible semantic mess. Like building architects who have floor plans, elevations, electrical cabling diagrams, and so on, we need multiple blueprints to address different concerns, and to express the separate but interdependent structures that exist in an architecture. As a solution, I had proposed an architectural representation based on four main views plus one,4 and others have proposed similar approaches, leading recently to an IEEE Standard.5

"Architecture is structure."

Again, the answer is "yes, but no." Yes, architecture must describe the structure of the system, and there are multiple structures intertwined in the architecture as we just saw. But no, there is more to architecture than just hierarchical decomposition, layering, or pipelining.

Architecture must also deal with dynamic issues that are beyond (or across) structure and only vaguely visible at the interfaces: the flows of messages or events, the protocols, the state machines, the creation of threads.

Architecture must also address the "fit": How will the ultimate system fit in its contexts? This includes both the operational context (Does it meet the needs of its users?) and the developmental context (Is it easy to build? Is it simply feasible with the resources at hand?). These issues go beyond mere structure, although some try to reduce them to that.

"System architecture precedes software architecture."

Yes, you need to define and structure the system as a whole before you can say a thing about software. But far too often I still see software-intensive systems being first completely designed by system engineers, and when all major system-level decisions have been made, the system engineers open the door and say: "Now you software types, you can come in. We'll show you where the computing nodes are, and you can sprinkle your magic."

This approach usually results in stovepipe systems: islands of software isolated on various computers. A large amount of effort is then dedicated to defining the interfaces between the various software subsystems. Most opportunity for software reuse across the system is nipped in the bud because the software architecture was not developed in conjunction with the system architecture, and the final system is not resilient to changes in the underlying hardware. And when it is discovered that an error or miscalculation was made at the system level, it becomes the job of the software types to fix it in software. This problem is even worse when the development of different software subsystems is contracted to different organizations. In long-term development efforts, it is likely that the underlying system will change during software development -- the hardware that was bid at the beginning of a project may not even be manufactured at the time of its delivery.

For software-intensive systems, system architecture and software architecture must proceed simultaneously. Their activities must be interleaved, feeding each other with solutions, opportunities, and constraints. The architectures must be almost indistinguishable: The system architecture is -- and includes -- the software architecture.

"Architecture cannot be measured or validated."

Architecture is not just a whiteboard exercise that results in a few interconnected boxes and is then labeled a high-level design. There are many aspects you can validate by inspection, systematic analysis, simulation, or modeling. And I am all for using them appropriately. But in the end, all these techniques may only point to something in the proposed architecture that may not work. You can only inspect, model, and analyze what is known and described; architectural flaws most often come from the unknown.

I have come to believe that to validate that an architecture will work, you have to implement it and try it. Winston Royce and Walker Royce wrote:6

... thus software architects have no irrefutable first principles; without available theory, their starting point must be some form of experimentation. Experimentation via simulation is a possibility, but it rarely works for software, primarily because high-fidelity, easy to build simulations are only possible to build if the underlying physics is known or a mathematical model can be assumed -- exactly what is missing in the first place. The remaining experimental alternative is prototyping, where the word prototype is used with a very narrow, specific meaning; i.e., building something with the same design as is intended for the final deliverable product.

Building a skeletal architecture in early iterations is a method of architecture validation with many benefits. It helps us to:

● Mitigate technical risks by trying proposed solutions early in the design process.

● Progress in both the problem domain and the solution domain.

● Reduce integration risks.

● Jump-start the testing effort by providing something to integrate and test early in the development lifecycle.

● Gain experience with tools and people so that both operate smoothly when we need them later.

● Set up appropriate expectations in terms of functionality, performance, completion date, and cost, so that there are no surprises late in the project.

A core tenet of the Rational Unified Process is an early focus on the design and validation of a baseline architecture in its elaboration phase.

"Architecture is a science."

Not yet. Scientific, analytical methods are hard to apply to anything that is not ridiculously small, as the Royces said. There is no real proof that the architecture will work, other than prototyping for some aspects, because there are few quantitative, objective criteria.

Usually, time is of the essence when designing system architectures. The architects have no latitude to systematically study all possible solution paths and their combinations in order to come up with the optimal solution; they must rapidly make decisions to allow work to proceed. There is no point in coming up with the ideal solution after the battle is lost. I often describe the life of a software architect as a long and rapid succession of suboptimal design decisions taken partly in the dark. It is not a static function that we are optimizing anyway. Neither the constraints nor any parts of the problem are static enough for long enough to approach anything "optimal." Architecture is not about the optimal, or ideal; it is about the adequate, or satisfactory.

The nasty consequence is that when one looks at the architecture of the system after it has been implemented, it is far too easy to criticize every design decision that was made. With history on your side, you know much more than the architects knew at the time they were making the decisions, both about technology and the requirements. "Why didn't you use CORBA? Or this tool?" the ex post facto critics will ask, but at the time the decision needed to be made, not enough information existed to make such a choice. In order for the project to progress, plans must be laid, decisions must be made, and the team must move on.

Will architecture ever become a science? Not very rapidly. It needs to first reach the level of engineering. I know computer scientists who will cringe when reading this.

"Architecture is an art."

Whoa. Let's not fool ourselves. Some architects may like to portray themselves as magicians: "Hire me. I'll look at your project, retreat onto my mountain, levitate for a while in trance, then come down with the solution." The artistic, creative part of software architecture is usually very small. Most of what architects do is copy solutions that they know worked in other similar circumstances, and assemble them in different forms and combinations, with modest incremental improvements.

It is possible to describe an architectural process that has precise steps and prescribed artifacts, and that takes advantage of heuristics and patterns that are starting to be better understood. We are attempting to do this in a modest way in the Rational Unified Process.7 But our description also emphasizes the importance of having a team of architects with a broad spectrum of experience.

"These are the top ten misconceptions."

Maybe the biggest misconception of all is to think that we know enough today about software architecture for an individual like me to stand up and declare that these are misconceptions! My certainties may be misconceptions in your eyes, and vice versa. Whatever you believe, however, at least consider these ten topics as important issues that deserve some attention, or potential traps into which I hope you will not fall.

Acknowledgments

I would like to thank my friends and colleagues Grady Booch, Dean Leffingwell, Kurt Bittner, Rich Hilliard, Walker Royce, Rick Kazman, and Jaswinder Madhur for their insightful reviews of an earlier draft of this article.

References

1 F. P. Brooks, Jr., The Mythical Man-Month. Reading, MA: Addison-Wesley, 1975.

2 D. Shasha and C. Lazere, Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists. New York: Copernicus, 1995.

3 J. Katzenbach and D. Smith, The Wisdom of Teams. New York: HarperBusiness, 1993.

4 P. Kruchten, "The 4+1 View Model of Architecture," IEEE Software, 12 (6), November 1995, pp. 42-50.

5 IEEE Std 1471-2000, Recommended Practice for Architectural Description of Software-Intensive Systems.

6 W. E. Royce and W. Royce, "Software Architecture: Integrating Process and Technology," TRW Quest, 14 (1), 1991, pp. 2-15.

7 Rational Software Corporation, The Rational Unified Process, version 2001, 2001.

Additional reading: E. Rechtin and M. Maier, The Art of Systems Architecting. Boca Raton, FL: CRC Press.


A Process for Selecting Automated Testing Tools

by John Watkins, UK Technical Support, Rational Software

Let's have a show of hands: How many of you feel you have all the time, money, and resources you need to test your software? How many of you are relaxing your requirements for software quality? Does anyone out there have unlimited time for testing? And who feels confident they can develop defect-free software? All those who put their hands up can stop reading this article now!

As for the rest of us -- the majority of folks who develop software -- we have to face the fact that we are under intense pressure to deliver software of ever-increasing quality within ever-shorter development and testing timescales. What can we do?

First, we can leverage improvements in the management of testing and the testing process to make testing as effective and efficient as possible. We can also use automated software testing tools to improve quality and cut back on the time, effort, and costs involved in testing. But to identify and purchase the right testing tools for our respective organizations, we have to expend precious time, money, and staff resources. How can we afford to do this when we're already so painfully short on these things?

Let me assure you: You are not the first or only software developer to face this dilemma, and the situation is not hopeless. This article describes a process you can use to simplify the selection of an automated tool that matches your own particular testing requirements.1 The process is based on a number of sources, including work I have conducted on behalf of numerous clients as well as a vast body of feedback from organizations engaged in evaluating software testing tools. Briefly, we will discuss:

● Whether you really need an automated testing tool

● Why it is necessary to perform a formal evaluation

● The process of identifying and documenting your testing requirements

● How to research the market and generate a short-list of tools

● Why you should invite suppliers in for presentations

● The role of product evaluation in selecting the right tool

● Post-evaluation activities

Figure 1 provides a "road map" of this process, showing the stakeholders, the activities that need to be performed, and the artifacts that need to be created. The notation follows Philippe Kruchten's specifications for the Rational Unified Process.2

Figure 1: A road map of the tool selection process, showing the stakeholders, activities, and artifacts involved.

Do You Really Need An Automated Testing Tool?

Undoubtedly, you have some issues with your current approach to testing, but do you really need a testing tool? Before rushing out to buy one, you should first consider the following:

● Tools cost money -- money that might be spent more effectively. For example, could an investment in training help address your testing problems?

● Are you managing your approach to testing efficiently? Could better management techniques be a solution?

● Could adopting an effective development and testing process (such as the one outlined in the Rational Unified Process) be of benefit?


● If you have a short-term or infrequent requirement for testing, would it be more cost-effective to outsource the testing?

Another caution: Do not even think of rushing out to buy a testing tool if your project is in crisis. Wait until you can devote the time and effort required to introduce a new tool properly -- and expect to use it a few times before you see any real productivity gains.3 There are genuine benefits to be gained by using these tools -- but only through a planned and managed introduction, and through continued use and re-use of the test suite that you will develop.

So when should you use an automated testing tool? Such products are particularly appropriate if you have:

● Frequent builds and releases of the software you are testing

● A requirement for thorough regression testing, and particularly for business critical, safety critical, and secure or confidential software systems

● Software involving complex graphical user interfaces

● A requirement for rigorous, thorough, and repeatable testing

● A need to deliver the software across many different platforms

● A need to reduce timescales, effort, and cost

● A requirement to perform more testing within shorter timeframes

If you have one or more of the above requirements, then you should investigate what benefits you could gain from testing tool use.

Why Follow a Formal Selection Process?

Most suppliers will tell you that their tool is clearly the best, is incredibly easy to use, and will of course solve all your testing problems. The reality is that you have a unique set of requirements, and you must make sure that any tool you select satisfies those requirements.

Furthermore, many suppliers will insist that you should buy their tool because it is the market-leading product and therefore must be the best. Be careful not to rely on such claims. What is important is that the tool match your specific testing requirements. EXE Magazine recently commissioned a study that showed that the top five issues for tool purchasers were: reliability of the tool, good match to requirements, adequate performance, ease of use, and good documentation. Market leadership was ranked as the least important issue, in fifteenth place!

The bottom line is that adopting a tool requires a significant investment in time, effort, and money. Before committing your organization and its resources, you should assure yourself that the tool you choose will match your testing requirements. To do this, you should adopt a formal process for identifying the right tool and be able to demonstrate its rigor to your managers and colleagues.


Identify and Document Your Testing Tool Requirements

During the process of determining whether you need a tool, you must consider your specific testing requirements for such a tool. For example:

● The tool must support functional testing and regression testing

● Process support must be available for the tool

● The tool must seamlessly integrate with other development tools

Make sure that you document your specific requirements; they will be used later. As you compile your set of requirements, try to determine just how important each one is and assign it a weight to indicate its significance. Most organizations use a simple, three-category approach: "Essential," "Important," and "Desirable."

Maintain the requirements information as a live document; you are almost certain to add requirements as the evaluation process continues and to relax or strengthen the weightings of existing requirements.4
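To make the weighting scheme concrete, here is a minimal sketch in Python of how such a requirements list might be recorded so that it can be scored later. It is purely illustrative: the requirement texts, the three category names, and the numeric weights (3/2/1) are assumptions made for this example, not part of any prescribed method.

    # A hypothetical way to record weighted testing tool requirements.
    # The categories and numeric weights below are illustrative only.
    CATEGORY_WEIGHTS = {
        "Essential": 3,
        "Important": 2,
        "Desirable": 1,
    }

    requirements = [
        {"id": "R1", "text": "Supports functional and regression testing", "category": "Essential"},
        {"id": "R2", "text": "Process support is available for the tool", "category": "Important"},
        {"id": "R3", "text": "Integrates with our other development tools", "category": "Essential"},
        {"id": "R4", "text": "Runs against our target platforms", "category": "Desirable"},
    ]

    for req in requirements:
        weight = CATEGORY_WEIGHTS[req["category"]]
        print(f'{req["id"]} ({req["category"]}, weight {weight}): {req["text"]}')

Keeping the list in a simple, machine-readable form like this also makes it easy to adjust weights or add requirements as the evaluation progresses.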

All right. Armed with your documented testing tool requirements, you are now ready to do some market research.

Conduct Market Research and Compile a Short-List of Tools

The next task in the evaluation process is to identify those testing tools that most closely match your requirements. This is a two-step activity:

● First, identify a collection of candidate tools that loosely match your high-level, or "Essential," requirements.

● Second, review the candidate tools more rigorously to reject those products that fail to match your "Essential" and "Important" requirements. This may involve contacting the supplier, obtaining product brochures, and visiting the supplier Web site.

To identify the candidate tools, you can use many sources of information, including:

● Testing trade magazines (such as The Professional Tester)

● Special interest group meetings (such as the British Computer Society Specialist Interest Group in Software Testing [BCS SIGiST] group)

● Testing exhibitions, tools fairs, and conferences (such as QBIT's Testing Week)

● Analyst publications (such as those produced by IDC, Gartner, Ovum, and other consulting firms)


● The Web

The output from this task will be a short-list of tools and contact details for their suppliers. Ultimately, you should narrow your list to just two tools, which you can then investigate in greater detail.

Invite Suppliers to Give Presentations

Once you have your short-list, it is time to determine which tool best matches your requirements as a prelude to formally evaluating that tool.

A particularly effective strategy for making this determination is to ask suppliers to do presentations for the tools on your short list. Contact the suppliers and outline your requirements. Or consider giving them a copy of the requirements; trustworthy companies will quickly let you know whether there is a good match and thus avoid wasting both your time and theirs.

Propose that the supplier organize a presentation of their tool based on your particular requirements, and consider providing them with a copy or sample of your application so they can use it -- rather than some demo application -- for their presentation. Then, prepare for the presentation by reviewing your requirements and formulating questions for the supplier. It may also be beneficial to send the supplier an agenda that describes what you expect to see.

During the presentation, make sure you take notes documenting the progress of the event, the answers to questions you ask, and any further questions that are raised. Do not be afraid to press the suppliers if they seem to be skipping over some aspect of the tool or fail to answer any of your questions adequately. Their reticence may be quite innocent, but it may also be a tip-off to a weakness or limitation of the tool. Finally, remember that you are in charge, but be a good host, too: you may end up working with these people if you purchase their tool!

If you have short-listed two suppliers, it can be very effective to see them both on the same day (one in the morning and one in the afternoon). Plan to review results immediately after the presentations, when both are fresh in your mind. Evaluate how well the tools met your requirements, and determine if you have further questions or need clarification on anything from the supplier. If you do have further questions, then document them and provide them to the supplier.

When you get the answers to your questions, you should have enough information to select one of the short-listed tools for formal evaluation. Or, if your organization has sufficient time, resources, and funds, you may want to conduct a formal evaluation of both tools on your list.

Formally Evaluate the Tool of Choice

After selecting the tool you believe most closely matches your requirements, you will need to perform a formal evaluation of the product to demonstrate that it will be able to satisfy your requirements in practice (that is, in your test environment and with your software). This exercise is often referred to as a "Proof of Concept."

Contact the supplier and ask for an evaluation copy of the tool. Most suppliers will provide you with a full working copy of the product, which is typically licensed for 30 days. Ask what support is available to you during the evaluation period. (Will the supplier provide assistance with installation? Whom should you contact for technical support? Is documentation available to help support your evaluation?)

Run the evaluation as a formal project, and ensure that adequate commitment is available from senior management in terms of timescales and resources (Figure 2 provides a typical project plan for an evaluation). Within your evaluation plan, include a number of milestones at which you will formally contact the supplier to review progress and address any issues raised by the evaluation.

Use your formal requirements document to evaluate the tool, taking into account the weightings for each requirement and identifying how well the tool satisfies that requirement.

After evaluating the tool, you should document the results in an Evaluation Report. This may be as simple as a checklist with the requirements, their weightings, and the evaluation score, or as formal as a written report. Consider providing the supplier with a copy of the report so that they can review your results. If you have misunderstood some aspect of the tool and assigned it a low score with respect to an associated requirement, then the supplier will be able to advise you of the misconception and explain how the requirements can be satisfied.
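As an illustration of how such a checklist can be reduced to a single figure, here is a small Python sketch that continues the hypothetical example from the requirements section above; the 0-to-3 satisfaction scale and the individual scores are invented for the example and are not part of any prescribed scheme.

    # Hypothetical weighted scoring for an evaluation checklist.
    # Illustrative satisfaction scale: 0 = not met, 1 = partial, 2 = largely met, 3 = fully met.
    CATEGORY_WEIGHTS = {"Essential": 3, "Important": 2, "Desirable": 1}

    checklist = [
        # (requirement id, category, satisfaction observed during the evaluation)
        ("R1", "Essential", 3),
        ("R2", "Important", 2),
        ("R3", "Essential", 2),
        ("R4", "Desirable", 1),
    ]

    total = sum(CATEGORY_WEIGHTS[cat] * score for _, cat, score in checklist)
    best = sum(CATEGORY_WEIGHTS[cat] * 3 for _, cat, _ in checklist)
    print(f"Weighted score: {total} of a possible {best} ({100 * total / best:.0f}%)")

    # A score of zero on any "Essential" requirement deserves attention,
    # whatever the overall percentage turns out to be.
    for rid, cat, score in checklist:
        if cat == "Essential" and score == 0:
            print(f"Warning: essential requirement {rid} is not satisfied")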

In addition to an Evaluation Report, you may also need to produce a Business Case document for senior management containing recommendations on the acquisition of the tool as well as return on investment calculations.5

Once you complete the evaluation, you should review the results and decide on next steps. If the evaluation was satisfactory, then you will need to consider the best way to acquire the product.

Plan Your Tool Purchase and Introduction

When you are ready to purchase the testing tool, there is a new set of issues to consider.

The first thing to determine is the number of licenses you need to buy to allow best use of the tool within your organization. You will also need to consider the issue of fixed licenses (typically tied to a particular workstation) versus floating licenses (typically installed on a server and issued on a first-come, first-served basis). Floating licenses will give you more flexibility in making the tool available but will almost certainly cost more than the equivalent in fixed licenses. In determining the number of licenses, remember that most suppliers offer volume discounts. Ask your supplier what the break points are so that you can take advantage of these discounts if possible.

Although it generally makes good business sense to inquire about discounts, be careful not to let pride in your bargaining skills get in the way of obtaining a product that will benefit your organization. As incredible as it may seem, potential buyers frequently walk away from a purchase just because the supplier cannot reduce the purchase price by a trivial sum.

On the other hand, you should also be very wary of suppliers who will suddenly slash the price of their product as soon as they hear that one of their competitors is involved. You will have to work with the supplier following your purchase (perhaps for training and mentoring as well as ongoing support), so it is worth questioning the business ethics of any supplier who was perfectly happy to charge you X one day and then suddenly charge you half that amount just a few days later for exactly the same product (while presumably still making a profit). This behavior does not bode well for an ongoing business relationship, and suppliers who indulge in such activities are almost certain to find ways of recouping the discount at a later stage -- otherwise their business would be unsustainable.

The organizations that gain the greatest benefits from tool use plan and manage the introduction of new products into their development environments. Plan on doing some training before the installation, for example. Initial mentoring by both vendor representatives and a small group of well-trained employees can ensure correct and effective use of the tool throughout your testing team. Also, consider the potential benefits of planned consultancy visits throughout the testing project; think of these as "health checks" to ensure continued effective and efficient use of the tool. One organization that followed this route claims to have cut the initial time and effort they spent on testing by 35 percent.6

Finally, once the tool is up and running, persevere. You will not reap great benefits from your purchase unless you continue using it to support your testing activities. The more often you re-use the test scripts you create, the more savings you will realize in terms of time, effort, and cost.

1 For more information, you can also consult my whitepaper on this subject, which provides more detail on evaluating testing tools, an evaluation project plan, an evaluation criteria and scoring scheme, an evaluation checklist and report, and business case templates. You'll find a current version here [Word Doc, 237KB]. The Letters section of the May issue of The Rational Edge will publish a URL for the final version.

2 In Philippe Kruchten, The Rational Unified Process. Addison-Wesley, 1998.

3 One of the case studies in Automating Software Testing, by D. Graham and M. Fewster (Reading, MA: Addison-Wesley, 1999), does describe an organization that benefited after just two uses of a software testing tool, but this is definitely an exception.

4 My white paper (see Footnote 1) provides comprehensive guidance and advice on documenting your requirements as well as a weighted scoring scheme to assist in your evaluation.

5 My white paper (see Footnote 1) provides re-usable templates for both the Evaluation Report and Business Case document.

6 See "Crown Management Systems Limited Case Study," Crown Management Systems and Rational Software, 1999. Available from [email protected].


Automated Deployment: Give Your Web Team a Good Night's Sleep

by Bernie Coyne, Product Manager, Rational Suite ContentStudio (RSCS)

Let's imagine a company named Highside Engineering, an OEM parts manufacturer. Despite its long-time reputation as an innovator in the fiercely competitive world of motorcycle racing, Highside made its Web debut with caution. Its first site, launched several years ago, consisted of a few simple, static pages that informed visitors about the company and its products, listed the business address and toll-free telephone number, and included an e-mail address to write for follow-up information. But from these humble beginnings the site evolved rapidly into its current form: a complex, multi-server platform for e-business transactions that features a full-color product catalog, online ordering, order tracking, and customer care. The site also offers membership privileges that include personalized content, a discussion forum, and an edgy industry newsletter.

As for many companies, the Web is now the focal point for diverse forms of interaction between Highside and its customers, suppliers, and other business partners. Presenting site visitors with a smooth, informative and compelling experience is nothing less than mission critical.

But Highside's beleaguered Web team is beginning to show the strain of managing an ever-growing stream of changes and enhancements to the site. And now a major product launch looms. The newly-appointed site manager and the Web development and testing teams, currently armed only with traditional content management and software configuration management tools, contend daily with a barrage of new PDFs and text from marketing, images and animations from the art department, updated Java class files, new configuration scripts -- you name it. Keeping all that new code and content properly synchronized is consuming everyone's workday, and most of the night.


But oddly enough, coping with that steady stream of new files is beginning to feel like the least of the site manager's worries. Foremost on her mind is an even bigger problem: On launch day, all that new code and content must be simultaneously deployed across Highside's multi-server Web delivery platform. And the new site had better work -- and look -- great.

Automating Deployment with ContentStudio's NetDeploy

A key component of Rational Suite ContentStudio is Rational NetDeploy, an integrated, full-function, graphical tool for automating Web deployments. NetDeploy allows administrators to conveniently specify exactly what will be deployed, make sure it happens, and then ensure it works as expected.

With NetDeploy, you can:

● Deploy any specified code or content objects from one or more source servers to an arbitrary number of staging or production (target) servers, whatever their respective configurations. To optimize network utilization and minimize update time, you can choose to deploy all the code and content files that a deployment task defines, or only those that have changed since the last deployment.

● Schedule deployment tasks to occur automatically at regular intervals (daily, weekly, etc.) or in response to the completion of some other task, such as the approval of new product price information. Deployments can also take place on demand, at any time. Comprehensive tracking capabilities let you see at a glance what tasks are pending, executing, or completed.

● Check the audit trail that each deployment automatically spawns, which provides information about each file deployed. Audit logs also facilitate rollback to a previous version in the event of problems. For example, the site can easily be rolled back to yesterday's state (or any previous version) if someone releases a time-sensitive press announcement prematurely. Detailed change histories form the basis for continuous improvement.

● Automatically purge expired content or refresh it, either at predetermined intervals or in response to specified events, such as the approval of a more recent version.

● Deploy updated files securely, even through corporate firewalls, via your choice of HTTP or HTTPS. HTTPS provides for mutual authentication by the source and target servers to help thwart rogue deployments, as well as for SSL encryption of content to prevent tampering or unauthorized access during transmission.

Rational NetDeploy offers even greater value for Rational ClearCase customers because it enables you to deploy Web code and content you're already managing in ClearCase. (ClearCase is not required on target servers, however.)

Rolling out an update can make even the most sophisticated Web team lose its cool. Many organizations are still struggling to find the right tools and processes to manage revisions efficiently. Without the ability to unify and coordinate the work of contributors and developers, synchronizing the code and content components of change requests can become a significant problem that threatens e-business success. The challenges involved in deploying approved changes to the site greatly compound these issues.

Last month's article by Rachael Rusting discussed these challenges from the perspective of code and content workflows, and explained how ContentStudio can help address them. Here we explore the next step in successful Web site management, by looking more closely at how these workflows come together at deployment time.

We all know that Web updates are both more frequent and less regular than conventional software updates. Web sites must evolve constantly in response to new business strategies, tactics, offerings, and technologies. For an e-business site, fresh content and state-of-the-art services are essential to maintaining customer satisfaction and strengthening business partner relationships. The more your organization relies on its e-business infrastructure, the more frequently updates need to happen -- perhaps daily, or even multiple times in one day. That's a far cry from the annual or semi-annual revision cycle for conventional software applications. And because the changes involved are usually incremental rather than "all at once," Web revamps often require more pre-rollout planning and expertise than non-Web software revisions.

The distribution of a Web update can also present challenges. Whereas conventional software updates most often flow to (and/or from) a single location, Web deployments frequently involve multiple, distributed servers and/or a multi-step deployment that moves from staging servers inside the firewall to production servers on the public Internet. Moreover, because changes take effect in real-time, the Web calls for precisely timed replication across the network, as opposed to more straightforward forms of distribution.

Balance all that against the need to present site visitors with a high-quality experience, plus requirements to approve new content and test new functionality, and you've got a recipe for long nights, short tempers, and potentially some very public examples of human error.

Options for Deployment Automation

Is there a way out of this dilemma? Yes! The key is to automate the deployment process, while linking deployment to the code and content approval workflows it depends on. As companies make the transition from static Web pages to an e-business infrastructure, most invest in some sort of automated revision and approval process for content, and in a (frequently separate) automated management system for code. But until recently, there have been few options for transitioning to an automated deployment model as a logical complement to these other automated processes.

Many organizations still rely on FTP (File Transfer Protocol) to update their Web sites, which barely constitutes automation. Basic FTP is fine for bulk file transfers but too crude to efficiently stage a multi-server, incremental Web update. FTP forces site managers to select individual files for copying, for instance. True, with custom programming you can extend the standard FTP utility so it can figure out the difference between the current deployment and the new one, thus permitting an incremental update. But such extensions are a pain to maintain. And the only readily available security for FTP transfers is the userid/password that FTP requires.
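To give a flavor of what that "custom programming" involves, here is a minimal, hypothetical Python sketch of just the diff step: comparing checksums of the files in the new build against a manifest recorded at the last deployment to decide what needs to be transferred. This illustrates the general idea only and is not a description of how NetDeploy works; the file and directory names are invented, and everything the sketch leaves out -- the transfer itself, multiple target servers, rollback, scheduling, and security -- is precisely what makes such home-grown scripts a pain to maintain.

    import hashlib
    import json
    from pathlib import Path

    def checksum(path):
        """Return the SHA-1 digest of a file's contents."""
        return hashlib.sha1(Path(path).read_bytes()).hexdigest()

    def build_manifest(root):
        """Map each file (relative to root) to its checksum."""
        root = Path(root)
        return {str(p.relative_to(root)): checksum(p)
                for p in root.rglob("*") if p.is_file()}

    def incremental_changes(root, manifest_file="last_deployment.json"):
        """Compare the current site tree against the manifest from the last deployment."""
        current = build_manifest(root)
        try:
            previous = json.loads(Path(manifest_file).read_text())
        except FileNotFoundError:
            previous = {}  # first deployment: everything is new
        to_copy = [f for f, digest in current.items() if previous.get(f) != digest]
        to_delete = [f for f in previous if f not in current]
        return to_copy, to_delete, current

    to_copy, to_delete, current = incremental_changes("./site")
    print("Files to transfer:", to_copy)
    print("Files to remove on the target:", to_delete)
    # Only after a successful transfer should the new manifest be recorded:
    Path("last_deployment.json").write_text(json.dumps(current, indent=2))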

In UNIX environments, the RDIST or RSYNC utilities are frequently pressed into service for Web updates. These are powerful, flexible tools, but their cryptic, command line-driven interfaces are laborious to maintain and monitor.

In a world in which content and functionality can continue to change even as a reconstruction is in progress, these user-unfriendly deployment options are clearly outmoded. At Highside, and at any other company doing e-business today, what's needed is an easy-to-use, graphical solution that can schedule, track and control a multi-server Web deployment, automating each update to whatever degree is desirable while leaving administrators free to take control at any time. Updates, after all, are a critical process; they should be as fast, smooth, and accurate as the rest of your Web operations.

Logically, an update should simply be the completion step in the workflows that manage your Web code and content throughout its life cycle, from review and approval to expiration and replacement. Sound like a pipe dream? It's not. Rational Suite ContentStudio actually offers you that level of manageability and convenience today.

Automating Deployment with Rational Suite ContentStudio

Rational Suite ContentStudio takes the stress out of deployment by enabling site managers to schedule, track, and control every aspect of the deployment and subsequent expiration of all Web code and content, all via a user-friendly, graphical interface. Let's revisit that site manager at Highside Engineering, and see how ContentStudio can help her cruise through a launch unscathed. We'll suppose that, instead of grappling with the content management problems we described above, the Web team has been using ContentStudio to help them manage both code and content workflows for the launch. Here's how things might look as the appointed day draws closer...

Day in and day out, as content providers and Web developers make enhancements to the Highside site, approval workflows automatically post the correct code and content to Rational ClearCase Versioned Object Bases (VOBs) on the source server(s). Updates have become largely a matter of identifying where to put the new files; ContentStudio takes care of nearly everything else. Since ContentStudio automatically tracks which files will expire, which require an update, and what file versions to use, there's virtually no need for the site manager to manually select individual files for the update.


Any time the directory configuration of one of the staging servers changes, the site manager adjusts the target locations for some of the files to match the new directory hierarchy and folder names. When the company chooses to beef up its Web delivery platform by adding another production server, it's a simple matter to tell NetDeploy about the new server, and to modify the distribution of files in response to the new site topology. No arcane scripts to rewrite!

Highside's marketing director has requested that the new site be up and running well before the start of business tomorrow. So the site manager schedules the first phase of the update for 6 p.m. her time. That will give colleagues on the West Coast a few more hours to approve last-minute changes. The second phase of deployment, from staging to production, will take place after the staged site is tested.

At 6 p.m. Central Time, Rational NetDeploy automatically posts all new and modified files to the correct target servers. The Web team is now free to test the staging site in order to ensure that everything looks and works as expected.

At 8 p.m., the Web team gives its OK, and an administrator initiates the second phase of deployment, which redeploys the files to production servers outside the firewall. A few minutes later, the latest information about Highside's new products is available live, worldwide.

When she arrives at work on "launch day," the site manager checks the audit log to verify that all deployment tasks completed successfully. (If a deployment task fails, then the state of the target server will be unchanged.) She is quickly able to confirm that all tasks executed successfully and that all the latest code and content is up and running. She and the Web team heave a collective sigh of relief, and everyone goes home on time that day.

Our story has a happy ending, but without the right tools, it might have been otherwise. Code and content might have gotten out of sync, possibly leading to the accidental deployment of unapproved files. The update might have faltered due to breaking scripts or unexpected configuration changes. The site might have been unavailable for an extended period while the Web team frantically performed manual processes.

This is the kind of confusion that Rational Suite ContentStudio is designed to help you avoid. It lets you manage all code, content, and site-related processes centrally, using a convenient, graphical interface. This gives administrators the top-down, big-picture perspective required to coordinate site activities. Yet it also provides the detailed reports and configuration options essential to fine-tuning each phase of an update or any other process, be it routine or exceptional. That means everyone on your Web team gets to sleep well the night before, and the night after, deployment!


Explaining the UML

by Joe Marasco, Senior Vice President, Rational Software

The one true test of your understanding of any concept comes when you must explain it to someone unskilled in the art. For those of us who deal daily in technology, the most maddening variant of this challenge is to transmit your understanding to "civilians," that is, other intelligent people who have little or no background in technology. The reason this is so difficult is that you cannot fall back on technical jargon -- that shorthand that permits high-bandwidth communication with peers, while at the same time presenting a barrier to those unfamiliar with the lingo.

In fact, I have found that software people have difficulty explaining the nuances of their craft to other engineering professionals. On a recent trip to China, I needed to explain the UML (Unified Modeling Language) and its significance to technical managers who were not software professionals themselves. I had not anticipated that this would be a problem, but when I first mentioned "UML," I got nothing but blank stares. Before I could advance, I needed to get them grounded in UML. But how?

What follows is the ten-minute presentation I improvised and subsequently polished. When we're done, there's a neat irony that wraps up the tale.

What Is the UML, and Why Is It Important?

Let us begin with a simple example. If I write on the whiteboard:

1 + 1 =

anywhere in the world, people understand what I am trying to say. In fact, at this point, someone in the audience always volunteers "2"! When that happens, I complete the equation:

1 + 1 = 2


and explain that not only are we understood around the world, but we usually get the right answer, too.

This is a good example of a universal notation, that is, the number system. People all over the world use it to communicate with each other. An English speaker can write it down, and a person speaking Mandarin in China can understand it.

Although this example seems trivial at first sight, it really does reveal an amazing fact: Numbers are universal, and certain symbols such as + and = have the same meaning all over the world.

The other really nice thing about this example is that anyone who has a first-grade education can understand and appreciate it. It has the unfortunate disadvantage of appearing to be more trivial than it really is.

A Second, Less Trivial, Example

At this point I acknowledge that perhaps this first example is a little too simple. So I then draw a triangle on the whiteboard that looks like this:

I then point out that the triangle takes on additional meaning when I complete the diagram with the following addition:

Now this triangle is unambiguously a right triangle, because the little square doohickey is a worldwide convention meaning "right angle." Furthermore, I can now label the sides of the triangle A, B, and C:

And, immediately, we can write down that


A² + B² = C²

Now this has a few very endearing properties. First, it is once again an example of a universal notation. Right angles, right triangles, and the symbols representing them are the same all over the world; someone from ancient Egypt could in principle reason about right triangles with a modern Peruvian by drawing such diagrams. What's more, once the diagram for the right triangle has been written down, the relationship of A, B, and C is defined. A, B, and C can no longer have completely arbitrary values; once any two of them are specified, the third is determined as well. The diagram implies the Pythagorean Theorem. One could even go so far as to say that the diagram has some "semantics," that there is a well-understood relationship between the picture and the values implied by the letters.
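For instance, if A = 3 and B = 4, the diagram leaves no choice about the third side: 3² + 4² = 9 + 16 = 25, so C must be 5.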

What is truly amazing about this example is that anyone with a high school education can understand it. If the person has seen any geometry at all, they have seen triangles and right triangles, and if they remember anything at all from their geometry, it is good old Pythagoras.

So now we have a diagram with semantics, and we have moved up a level of abstraction at the "accessibility cost" of moving from the first grade to the high school freshman level of mathematics. Also, at this point, people are definitely intrigued as to where I am going with all this. So I try to bait the hook with a very tasty worm.

The Third Example

So far, these examples demonstrate the utility of a universal notation. The problem is, they are both from the world of mathematics; although math has concrete manifestations, it is intrinsically abstract. Are there any examples not from mathematics?

We then draw the following diagram on the whiteboard:

What is stunning about this picture is that as soon as I complete the drawing and say the words, "Here I have a simple circuit with a battery and a resistor," heads begin to bob. Of course, this is probably the simplest electrical circuit you could draw, but no matter. Just as the audience will applaud for itself when it recognizes the opening notes of Beethoven's Fifth Symphony, it will feel good about recognizing something technical. Without giving them too much time to think about it, I quickly add the symbols for a voltmeter and an ammeter.


And, in a final bold stroke, I note that if the battery is 6 volts and the resistor 6 ohms, then one ampere of current flows in the circuit:

Please excuse my rendering of the ohm symbol; it is important for the effect to use the symbol, not the word "ohm."

Now, people know what a 6 volt battery is; they can buy one in the store. And most people will have a recollection, however vague, that resistors are measured, or come, in units of ohms. So when you finally draw the "1 A" on the diagram, indicating that one ampere of current flows in the circuit (note that we even indicate the direction of flow!), people are totally convinced they know what you are talking about, even if they never could remember Ohm's Law.
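The implied calculation, for the record, is Ohm's Law: I = V / R, so 6 volts across 6 ohms gives 6 / 6 = 1 ampere of current.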

This is a very good time to mention that a Swedish student and an Australian hobbyist can communicate about this circuit without knowing each other's language. Once again, an international standard notation has come to the rescue. Only this time it is not purely mathematical; the objects in the diagram have real physical instantiations. Moreover, semantics is in play: not only is Ohm's Law implied, but also implied is the direction of current flow that comes from our notions of the positive and negative terminals of the battery, represented by the long and short horizontal lines. I typically spend a few moments on the richness of the information communicated in this simple diagram, and remark how hard it would be to do any electrical engineering at all if we didn't have this notation that is the same all over the world.

Incidentally, we have moved the accessibility threshold up to anyone having had one year of introductory physics.

And Now for the Relevance to Software...

Now is the time to summarize that we have seen how progress is made in all fields by having a common notation that can be used to express concepts, and how diagrams begin to take on precision and meaning once we attach semantics to the pictures. The most useful of these notations are understood the world over.

But before 1996 there was no common notation for software. Before the UML became an international standard, two software engineers, even if they spoke the same language, had no way to talk about their software. There were no conventions that were universally accepted around the world for describing software. No wonder progress was slow!

With the advent of the UML, however, software engineers have a common graphic vocabulary for talking about software. They can draw progressively complex diagrams to represent their software, just the way electrical engineers can draw progressively complex diagrams to represent their circuits. Things like nested diagrams become possible, so different levels of abstraction can now be expressed.

Rational's contribution in this area is huge. Formulating the UML and bringing the world's most influential companies -- IBM, Microsoft, HP, Oracle, etc. -- to agree on it was a major step. Getting an international standards body -- the Object Management Group -- to ratify it as a standard was the formal process that irreversibly cast the die. Everyone agreed to end the Tower of Babel approach and to settle on a common way of talking about software.

The significance of the UML is now established, and we can move on.

Final Thoughts

Of course, the UML itself is an example of "technical jargon." It is now the way software professionals talk to each other about software. As each example of a notation becomes deeper and denser, it can become an esoteric and subtle way of expressing ideas and designs that are very rich and complex. Yet, at the outset, this (and any) notation, at its highest level of abstraction, is useful for communicating between professionals and "civilians." That is because its fundamental elements can still be used to transmit fundamental ideas. A truly great notation "nests" and has many levels of abstraction; the highest levels facilitate communication between people who are "farthest apart" in terms of background and context, whereas the lowest levels (with the most technical detail) aid communication between people who are "closest together" in terms of their understanding of the domain -- the technical specialists.

What has been interesting about our journey this month is that we have used analogy to explain a technical notation. We have avoided the "self reference" trap -- i.e., we have explained the UML without describing the UML itself. We have explained the jargon without using the jargon.


Although this seems like a subterfuge at first ("Hey, wait a minute; I never even got to see a UML diagram!"), it is, in fact, a requirement that you be able to explain it without using it.

Otherwise, those "civilians" are going to block the first time you draw one. With this introductory context, however, I believe that the first UML diagram you do draw will be much better received. They will hark back to "1+1," Pythagoras, and Ohm's Law, and know that you are doing the same thing for software constructs.
