The Rational Edge -- July 2003

Editor's Notes:

Openings


It's nearly impossible these days to browse a journal focused on business in the IT industry and not find at least one article on either Web services or legacy system integration and maintenance. Of course, these topics are very often related, and the thousands of projects they represent are keeping many IT folks up at night for a range of reasons -- from the worries of a looming deadline to the thrills of an inspired solution. The pressure is on, since successful projects are helping businesses survive tough economic times. Very successful projects even allow companies to differentiate themselves competitively. So, is it counterintuitive that major enablers in these projects -- open standards and open computing technologies -- are explicitly designed for common usage?

As more software development organizations embrace open computing standards, and as more take the further step toward open source software, we thought it was high time to publish an introduction to all things "open." In his cover story, Douglas Heintzman, director of IBM Software Group's technical strategy team, explains the relationships among open standards, open computing, open source, and free software, and why businesses see increasing levels of strategic value in the transparency of open systems.

If you're curious why many software development organizations do a poor job with QA and testing, read Raj Kesarapalli's "Testing strategies for dysfunctional software organizations." He offers insights and remedies. We also have "A project manager's guide to the RUP" by Rational Unified Process gurus Philippe Kruchten and Per Kroll. And Murray Cantor gives us a systems engineering perspective on RUP, offering strategies for organizing large-scale RUP-based projects that require more than an understanding of iterative development techniques.

We kick off two new article series in our technical section this month. Tom Milligan's techniques for improving IBM Rational ClearCase performance will help your development team get more out of their day-to-day VOB interactions. And frequent Edge contributors Peter Eeles and Maria Ericsson team up to discuss modeling enterprise initiatives with RUP and the architecture of a system of systems. Given our cover story on open computing, it's appropriate that our Rational Reader section features Modernizing Legacy Systems, reviewed by our legacy systems czar Eric Naiburg, and Component-based Product Line Engineering with UML, reviewed by senior RUP developer Bruce MacIsaac.

Remember last month's treatise by Fredrik Ferm on subsystems? (The Edge received a number of inquiries about that article.) This month, Bran Selic takes a somewhat lighter view of the subsystem animal in our Franklin's Kite section. If memory serves, this piece also includes the first poem we've published in nearly three years of online content delivery. Tell us what you think!

Happy iterations,

Mike Perrow
Editor-in-Chief


An introduction to open computing, open standards, and open source

by Douglas Heintzman

SWG Technical Strategy, IBM Software Group

The IT industry is going through major changes. New concepts in technology, such as Web services and grid computing, are opening the door to tremendous opportunities for taking e-business to the next level of profitability. The potential of these technologies to transform business is truly remarkable, and open standards and open source software will play increasingly critical roles in this new world.

Just as open standards were critical to the emergence of the Internet and the first generation of e-business, they will play a critical role in the next generation of e-business on demand®. IBM defines an on demand business as:

"A company whose business processes -- integrated end-to-end across the company and with key partners, suppliers and customers -- can respond with flexibility and speed to any customer demand, market opportunity or external threat. An on demand business has four key attributes: it is responsive, variable, focused and resilient."

In the first generation of e-business, standards allowed heterogeneous systems to communicate with each other and exchange data. This was critical to the development of the World Wide Web, e-markets, e-commerce, and inter-company integration. These capabilities drove costs down and productivity up, while increasing both speed to market and business agility. During the next ten years, business agility will continue to be the critical differentiator for businesses and governments, and those that can shift their business strategies quickly in response to market dynamics, emerging opportunities, and competitive threats will prosper as on demand organizations. In this next generation of on demand e-business -- where computing resources become virtualized with corresponding flexibility and cost variability, where application function is discovered and bound to in a remote and just-in-time way, and where IT systems and business processes become integrated horizontally -- open computing and standards will become more important than ever.

This article will examine the roles of standards and of open source software in the market today. While these roles do overlap -- many companies use open source in part as a means of implementing open standards -- the cores of their respective value propositions are distinct and should be discussed separately.

Definitions

One of the great challenges in the industry dialogue regarding "open" concepts is the lack of a clear definition for each of the various terms. Clarity of definition should contribute to clarity of discussion. For the purposes of this paper, the following definitions will be used:

Open computing -- a general and inclusive term that is used to describe a philosophy of building IT systems. In hardware, open computing manifests itself in the standardization of plug and card interfaces; and in software, through communication and programming interfaces. Open computing allows for considerable flexibility in modular integration of function and vendor independence.

Open standards -- interfaces or formats that are openly documented, have been accepted in the industry through either formal or de facto processes, and are freely available for adoption by the industry. In the context of this article, the term will be used specifically to refer to software interfaces. Examples that many people are familiar with include HTTP, HTML, WAP, TCP/IP, VoiceXML, XML, and SQL. They are typically built by software engineers from various IT/software companies who collaborate under the auspices of organizations such as W3C, OASIS, OMA, and IETF. (A short sketch following these definitions shows how little code it takes to interoperate through one such open standard.)

Proprietary -- describes interfaces that are developed and controlled by a given company and have not been made freely available for adoption by the industry. Proprietary software uses non-public interfaces or formats. When an interface is non-public, the owner of the proprietary interface controls it, including when and how the interface changes, who can adopt it, and how it is to be adopted.

Open source software -- software whose source code is published and made available to the public, enabling anyone to copy, modify and redistribute the source code without paying royalties or fees. Open source code evolves through community cooperation. These communities are composed of individual programmers as well as very large companies. Some examples of open source initiatives are Linux, Eclipse, Apache, Mozilla, and various projects hosted on SourceForge.

Free software and software libre -- terms that are roughly equivalent to open source. The term "free" is meant to describe the fact that the process is open and accessible and anyone can contribute to it. "Free" is not meant to imply that there is no charge; "free software" may be packaged with various features and services and distributed for a fee by a private company. The term "public domain" software is often erroneously used interchangeably with the terms "free software" and "open source software." In fact, "public domain" is a legal term that refers to software whose copyright is not owned by anyone, either because it has expired or because the software was donated without restriction to the public. Unlike open source software, public domain software has no copyright restrictions at all; any party may use or modify it.

Commercial software -- software that is distributed under commercial license agreements, usually for a fee. The main difference between the commercial software license and the open source license is that the recipient does not normally receive the right to copy, modify, or redistribute the software without fees or royalty obligations. Many people use the term "proprietary software" synonymously with "commercial software." Because of the potential confusion with the term "proprietary" in the context of standards and interfaces, and because commercial software may very well implement open, non-proprietary interfaces, this article will use the term "commercial software" to refer to non-open source software.
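
To make these definitions concrete, here is what interoperating through one openly documented interface looks like in practice. Because the HTTP specification is public, a few lines of ordinary sockets code -- in any language, against any vendor's server -- can make a well-formed request. This is an illustrative sketch only; the host name is a placeholder.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // Illustrative sketch: speaking the openly documented HTTP protocol
    // directly over a socket. Any standards-compliant Web server, from any
    // vendor, will understand this request. "www.example.com" is a placeholder.
    public class OpenStandardDemo {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket("www.example.com", 80);
            PrintWriter out = new PrintWriter(socket.getOutputStream());
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));

            // The request line and Host header come straight from the published spec.
            out.print("GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n");
            out.flush();

            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // echo the server's standard response
            }
            socket.close();
        }
    }

No vendor's permission or proprietary toolkit is required; the openly published interface is the whole contract.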

The open computing environment

Most major companies and governments have embraced the concept of open computing. They purchase IT goods and services from a variety of vendors and expect the technologies to work together. They wish to have the flexibility to deploy hardware and software in a specific way in order to address specific business problems. They do not wish to be subjected to the priorities and schedules of any particular vendor. Open computing provides them with a way to treat technology components as discrete modules that can be mixed and matched.

There are a number of common beliefs associated with this trend. In the first place, IT organizations investing in open computing believe it will maximize their flexibility and, consequently, their business agility. They believe that open computing will allow them to rapidly adopt technology innovations and to exploit technology cost reductions. They believe open computing will provide them some degree of vendor independence. Well thought-out architecture and open standards are critical elements of open computing. Increasingly, companies are using open source software as a means of accelerating the adoption of open standards, which in turn allows them to implement open computing.

A brief history of open standards

Interfaces are the means by which the many elements of an application talk to each other and to other applications. Historically, interfaces were built by companies to allow internal and external programmers to add to the function of an application and to allow the value of an application to be enhanced with the capabilities of other applications. Companies that owned these interfaces wielded considerable control, which in turn enabled them to sell vertical software "stacks" to customers, or to assemble software stacks [1] that drove value into hardware platforms. This control model was very successful and the source of considerable profits for many of the major founders of the IT industry. When a company said it was an IBM shop, an HP shop, or a DEC shop, it really meant that, for the most part, it ran its business on a particular manufacturer's hardware and software.

In the 1980s, computing technology started to become more stratified, with much more distinct horizontal structures. Computer hardware became more commoditized and architectures more open. This led to greater degrees of modularization and interoperability, and to the development of a marketplace for peripherals. The net effect was an increase in the rate of innovation, greater value for the dollar, and a certain loss of account control by hardware vendors. The software side of the equation also saw horizontal stratification. Operating systems became more generic and independent of hardware platforms. The middleware layer evolved, allowing for greater cost effectiveness and greater innovation at the client layer, since application vendors were freed from having to worry about the plumbing.

These developments started to force interface standardization, which became vital in the effort to exploit networking technology and the growing usage of the Internet. The potential for computers to communicate with each other and for great stores of information to be virtualized was predicated on simple and standardized communications and interfaces. Therefore, while it may have been possible for a business to be an IBM, HP, or DEC shop in the past, it had become impossible for any one company to control the interfaces that ran the world's networks.

During the 1990s, a number of major companies made strategic decisions to embrace this evolution toward open standards. These decisions were based on simple pragmatism: if we are going to live forevermore in a networked world, then that networked world must run on open standards. This development has been good news for customers of IT and for the IT industry in general. The skill and resources of these industry players have been critical in the development of robust, functional, and highly practical interfaces that are critical enablers of e-business.

The battle over "openness" is still being waged. For the most part, businesses have embraced open standards as a means of ensuring degrees of flexibility and vendor independence. Many vendors have also embraced open standards, either because their role in the ecosystem as a provider of horizontal infrastructure or networking capability necessitates it, or because they desire to participate in markets dominated by other players who use their market position to promote proprietary interfaces. Some vendors have been successful in exploiting what economists call the "network effect" -- the tendency toward adoption of a common platform owing to the intersecting interests and interdependencies of ecosystem participants, including consumers. In turn, these companies have been able to exert control over programming interfaces and document formats to protect their market positions.


However, with the increasing momentum towards open standards and development of powerful alternative approaches such as XML, Web services, and J2EE (which isn't so much a standard as a widely adopted alternative programming model), the ability to exploit proprietary interfaces for competitive advantage will likely diminish.

The role of open source software

It has become clear that open source software (OSS) has an important role to play in the IT industry and business in general. Yet there is considerable confusion about the strengths and weaknesses of OSS. Some believe it will eventually replace the commercial software model -- or even that OSS is a critical element of a modern democracy. Others decry OSS as the single greatest threat to capitalism and the principles of intellectual property -- the ruin of Western society. Neither of these extremes is accurate. OSS, for the most part, represents a software development process. It can be leveraged to provide considerable value and to complement commercial software products. At the same time, commercial software products will continue to play a critical role for the foreseeable future. (The rationale for this conclusion is discussed below.)

Open source software isn't developed by any one company; it is developed by a community, and it comes in many flavors. For example, the Linux movement was started by an individual who was quickly joined by many others who used the Internet to collaborate on the project we know today as the leading open source platform. Others, such as Apache, are offshoots of academic work. Some, such as Mozilla and Eclipse, were seeded by substantial code donations from major software companies.

Open source projects

OSS describes many different kinds of projects, each with different characteristics. These projects are long term and evolutionary in nature and don't usually have a defined end point. It is useful to break them down into four major subcategories, as described below.

1. "Academic" projects: University students and professors, and researchers from academic, public, and private research facilities use OSS as a means of peer review. OSS provides a mechanism for others to comment on their work, review its merit, and improve the code. Some researchers and companies publish their work to the public in order to promote discussion and innovation. Some companies use the publication of academic open source as a means of improving relations with other developers and researchers and even to attract talent. The emphasis is on innovative function. The code base typically isn't structured in such a way -- and isn't stable enough -- to deploy in a commercial setting. IBM Research is an active participant in this community and publishes many research projects for peer review.

2. "Foundation" projects: These projects focus on some of the lower layers of the software stack. The major examples are Linux (operating system), Eclipse (development tools framework), Apache

Page 9: 2003 1:47:03 PM] - IBM · projects are helping businesses survive tough economic times. Very successful projects even allow companies to differentiate themselves competitively. So,

The Rational Edge -- July 2003 -- Open computing, open standards and open source

(HTTP server), Globus (grid computing), and Mozilla (Web browser). These types of projects represent the bulk of open source community development. They are delivering the bulk of the value of the Open Source movement and have by far the highest level of commercial deployments and support infrastructure. These projects have large and vibrant communities supporting them. Many also have large commercial software companies supporting them including IBM.

3. "Middleware" projects: These projects are focused on areas such as application servers, databases, and portals. This group includes Web application servers such as Tomcat and JBoss, Databases such as MySQL, and Portal software such as Jetspeed. In most cases these communities have small private companies at their core who have service business models. These communities have not attracted a critical mass of programmers and are only likely to have a marginal impact on the software industry in the near term. The dynamics of this category will be discussed below.

4. "Niche" projects: This group represents the bulk of the 56,000 registered projects at SourceForge.org (a Web-based repository of OSS projects). These projects are typically very small, and have very niche focus. Many are experiments or toolkits developed by industry organizations. They collectively represent a significant influence on the market but individually have negligible impact. Code from these projects is used to test against and as a source of ideas and techniques.

There is one other category -- "walled garden" projects -- that uses some of the same community development processes but is not really open source: the source code is not available to be modified by the public at large. Walled garden projects are composed of small groups of companies that put in place a mechanism to collaborate on the development of some common technology that all participants may access. The source code from this kind of project is available only to members of the walled community, whose membership is typically by invitation and supported by dues. These types of projects are not common. Commercial software companies may also use a walled garden approach as an internal development process.

The role of foundation and middleware projects

Of the open source project groups described above, the two most frequently discussed are the "foundation" projects and the "middleware" projects. These are the areas with potentially disruptive influence in the market.

It is quite normal that mature, common, "foundation" layers of technology will become commoditized over time. Value will migrate to higher and more innovative layers. This has certainly been true of the hardware industry and will also be true of the software industry. We have seen many examples of this already: TCP/IP stacks, compression tools, Web servers, and browsers come to mind. As a rule, the adoption of open standards by an industry will accelerate this migration of value as the cost of entry to a market falls and customers have greater choice. There have been some exceptions to this phenomenon -- notably desktop operating systems and basic office productivity function. These markets have been protected in large part by the "network effect" discussed earlier. The combination of network effect and control over proprietary interfaces has slowed the rate of commoditization and allowed the realization of considerable control and profitability from those product areas.

Open source software, specifically Linux, has the potential to disrupt this status quo. Linux has already presented a serious challenge to Microsoft's strategy of moving into the Unix server business with Windows/Intel economics. Linux has become a very capable and scalable server operating system, and many companies and governments have implemented Linux servers. Linux offers them some key advantages. On the low end, Linux on Intel provides attractive economics. In the midrange, Linux can run on high-performance RISC servers and has great affinity with Unix. On the very high end, Linux can run on mainframe business computers and even runs some of the world's most powerful supercomputers. This kind of scalability and multiplatform support has some very obvious advantages, including: 1) the ability to scale IT systems to match shifting business requirements, 2) the ability to optimize programmer productivity and minimize support requirements, and 3) the ability to leverage new innovations and take advantage of new cost structures regardless of platform.

To a lesser extent, Linux is also making inroads on the desktop. As companies and governments seek more cost-effective ways to deliver IT services to their user populations, the combination of the efficiencies of Linux, the runtime support of Java, the Web application model supported by a Web browser, and the aggregation and personalization services of a portal -- together with high-function open source office alternatives such as "Open Office" -- is providing a viable and financially attractive alternative to the traditional Windows environment.

Health and sustainability for foundation OSS projects

The growing popularity of Linux and open standards leads to an interesting question: How far up the software stack will open standards and open source accelerate commoditization, and at what rate? There are many factors to consider, the two most important being 1) the health and sustainability of a community and 2) the involvement of major industry players.

Regarding the size and relative health of the open source communities, we can distinguish the work of software development communities focused on the foundation/platform projects and the work of major industry players focused on projects at higher levels in the software stack.

Let's take a look at these two factors driving today's OSS "foundation" projects.


Large and healthy developer communities

As noted earlier, projects such as Linux, Eclipse, Mozilla, Globus, and Apache have relatively large and healthy developer communities. There are a number of reasons for this success, from both a business and a developer perspective.

Business perspective

1. These open source projects provide business value to end user customers. Because the community development process enforces modularity and standards compliance, these projects yield high dividends and provide significant value for businesses and governments.

2. These open source projects represent areas where one or more significant commercial software vendors have taken an active sponsorship role and where there is solid overall support. And the rewards are increasingly clear. Commercial software vendors recognize the opportunity to amortize collective investment in commoditized layers of the software stack. Moreover, commercial software vendors believe they can disrupt competitive control points such as critical interfaces or software stack components, eliminate vendor lock-in, and support multiple platforms. This is especially true of Linux.

3. Critical skills to support ongoing foundation projects are readily available in the software development community. These skills come from the hacker [2] community, from the consumers of the technology, from companies with complementary business models, from commercial software vendors, and from academia. All of these skill sources are fed by academic institutions that use OSS code to teach computer science concepts. Consequently, there is an educated population of programmers emerging from colleges and universities who are familiar with the open source environment and with the technologies being built in open source projects.

4. Long-term plans for improvements (design, development, test, documentation, etc.). The increasing number of conferences, the commercial interest, and the worldwide collaboration regarding the future of open source computing are strong indications of its maturity and reveal the degree of importance attached to it.

Developer perspective

1. Passionate interest in developing and enhancing code. These projects make for a highly visible "artistic canvas" for programmers to demonstrate their skills and technique. Some developers participate, at least in part, to establish a reputation and build credentials to interest potential employers.

2. Significant overlap between sets of users and developers. The direct participation of users and developers in the use of open standards and open source software leads to rapid incorporation of domain requirements into the code base and a significant degree of project focus on business-oriented as well as technical problems. This focus supports the relevancy and value of the project's output and, subsequently, the health of community participation.

3. Diverse, interactive community. The sheer volume and the many diverse perspectives of project participants lead to a rapid rate of innovation, optimization, and bug fixing.

4. Strong overall project/code leadership. A strong leadership style in the open source "maintainer" (the person in charge of deciding what goes into and what stays out of any particular release) is essential. Linus Torvalds, the creator of the Linux kernel, is a great example of this leadership. Strong leadership promotes focus and momentum and generally helps keep the project moving in the right direction. This success factor, like many others, becomes circular and self-promoting: important, successful projects attract strong leadership, and strong leadership in turn promotes the health of the project.

The foundation projects mentioned above are healthy and appear to be sustainable.

Involvement of major industry players

The second factor, involvement of major industry players in viable OSS solutions, is also significant. Linux, Eclipse, Mozilla, Globus, and Apache all have active participation and support from some of the largest and most influential software companies in the industry. The Linux community counts HP, IBM, Sun, and Oracle among its active contributors. Members of the Eclipse community include Borland, IBM, Oracle, Sybase, MontaVista, Red Hat, and many others. Mozilla has participation from Netscape, AOL, IBM, Red Hat, and Sun. Globus is sponsored by IBM, Microsoft, and Cisco. Apache lists Apple, IBM, Sun, CollabNet, and Red Hat among its active contributors. These lists are, of course, small samples of the large number of companies involved in these initiatives. Contributors also come from Brown University, the Massachusetts Institute of Technology, Stanford University, and many other academic institutions.

Why are major software companies increasingly interested in open standards and the open source movement? Primarily because investing in these communities makes good business sense, for three major reasons:

1. Drive rapid adoption of open standards. Companies that have made a strategic decision to back open standards, and whose business model is based on widespread adoption of open standards, support open source projects to help businesses and governments get ready access to high-quality, open source implementations of open standards and to speed industry adoption.

2. Use OSS as a strategic business tool. OSS can eliminate competitor "lock-in," create a competitive playing field, and open up new business opportunities. Vendors who support OSS make money on OSS services, as well as through the sale of components based on open source platforms. And their customer base is satisfied by the ability to extend their systems via additional components and services from alternative vendors, as needed. OSS can effectively "level the playing field" by removing many of the structural advantages that a company controlling proprietary control points may have.

3. Extend mindshare. Active participation in an open source community can enhance partnership relationships and help build relationships with a broad spectrum of developers.

The importance of these commercial software vendors' participation in the advance of OSS adoption cannot be overstated. These companies provide a critical mass of highly skilled programming expertise and core competence. Their involvement lends stability to open source communities and reassures consumers of the technology as to its long-term viability. The technical resources these companies provide can kick-start an open source community and supply critical technical and domain expertise as well. For example, the donations of the Netscape code base to Mozilla by AOL and of the Eclipse framework to the Eclipse Project by IBM have enabled two new OSS movements over the past three years. In addition, the financial resources of these companies can fund critical government standards compliance testing. A recent example is Oracle and IBM announcing their commitment -- likely to mean millions of dollars -- to getting Linux certified to meet US government security standards.

Challenges for middleware OSS projects

Middleware OSS represents a potential opportunity for many entrepreneurs -- typically small companies seeking to develop alternatives to commercial middleware technology. And as we know, opportunity invites competition, and competition drives innovation. Some middleware OSS projects may migrate into the foundation efforts and be picked up by those communities. We have already seen basic application server functions migrating into the Apache project. This function will be very useful to many businesses and governments and will likely be used widely in the near future.

However, middleware OSS projects have not yet had much impact in either the enterprise space or the mid-market space. For a variety of reasons, they do not share the characteristics that have made the foundation projects successful, including the critical mass of a healthy and sustainable community and the support of major vendors.

More importantly, middleware projects currently lack the enterprise features and characteristics of commercial alternatives, such as scalability, reliability, enterprise support, multiple language support, robust documentation, and integrated tooling. Unlike the OSS foundation-level projects, which have attracted support from major software vendors, middleware OSS communities have found it difficult to raise the funds required to purchase testing tools and perform government standards compliance testing. Security is another significant concern: the architecture of commercial offerings typically addresses security in ways that OSS middleware projects cannot. The mid market is dominated by ISVs who develop industry solutions requiring many "enterprise" characteristics for the software they either embed or require as a prerequisite.

The commercial application server vendors are innovating rapidly, creating function that allows them to differentiate their products competitively and resist commoditization through high-value capabilities such as portals, business process integration, automation, internationalization, and other strengths. The middleware market will continue to evolve and will very likely be characterized by high rates of innovation.

A quick note about licensing

There are many arguments that go back and forth about the role licensing plays in the potential success of OSS projects. In the extreme, some industry players have referred to OSS licenses as a "cancer." In my opinion, arguments over licensing are overblown. OSS projects use various types of licenses, ranging from the very flexible BSD (Berkeley Software Distribution) license, which has no obligations other than publication of a copyright notice and sometimes an attribution requirement, to the GPL (GNU General Public License), which requires that source code for modifications you distribute be made available under the GPL. Some have suggested that this restriction limits intellectual property rights and have characterized it as "viral." But note: the requirement to provide source code for modifications applies only if you redistribute the code outside of your organization. To an end user, the requirement to provide source code changes is not very important.

There is considerable experience in the industry working with the various OSS licenses. There is a general consensus that for the most part these licenses do not represent a barrier to the integration of OSS into business solutions and will likely have little, if any, impact on the success of OSS one way or the other.

What is driving open computing, open standards, and open source success?

The reasons for the success of open computing and open standards are obvious. They are a necessary feature of a networked world, and they are essential to the critical business factors of flexibility and business agility.

However, the explanation for the popularity of open source software is a little less clear. As we've discussed, not all OSS projects are created equal, and not all have been successful in the market or have the potential to be. In fact, in the grand scheme of things, only a small number of OSS projects fall into this category. There are some general reasons why businesses and governments look to OSS, and some specific reasons why they are looking at the foundation projects, and at Linux in particular.


General benefits

In general, businesses and governments see value in the following OSS features:

1. Flexibility to modify. Some businesses or governments require specialized modifications to a code base to accommodate specific business or technical requirements. OSS offers this flexibility. The National Security Agency (NSA) has done just this, creating a secure version of Linux.

2. Cost effectiveness. OSS often has some attractive up-front cost advantages, although there is much debate as to the total cost of ownership (TCO). There is anecdotal evidence that some companies have realized considerable license savings. On the other hand, it is argued that scarcity of skills translates to higher support and maintenance costs that nullify the up-front cost advantage. The economic case will vary from geography to geography as the availability of skills and labor rates vary. Unfortunately, there is no clear data yet on the TCO of OSS vs. commercial software.

Specific benefits: the case for Linux

In the specific case of the foundation projects, and especially in the particular case of Linux, businesses and governments see value in:

1. Multiplatform support. Some businesses and governments have realized advantages in deploying a common operating environment across multiple hardware platform architectures. They also see some advantage in being able to scale their applications beyond the confines of one particular hardware architecture. Linux, for example, is available on systems ranging from cell phones to supercomputers. An enterprise might deploy an application on an Intel platform, then need to scale the application to midrange systems such as IBM's pSeries or Sun's SPARC family, or even to IBM's mainframe zSeries systems. There may be other reasons to move platforms, such as reliability, manageability, corporate or departmental mergers, security requirements, or the exploitation of some specialized capability. This kind of flexibility allows businesses and governments to match their IT support to their business requirements and to have that IT support change as the business requirements change.

2. Standards enforcement/promotion. Standards compliance is a natural and inevitable characteristic of community-developed software, so some businesses and governments have decided to use OSS as a means of promoting or enforcing open standards and open computing. Implementing Linux or Apache, for example, implies implementing many of the most important Internet standards. Strict standards adherence at the lower foundation layers permits considerable flexibility in configuration and in the choice of application and vendor.


3. Auditability/security. Some businesses and governments believe that being able to see the underlying mechanics of code enhances their confidence in the reliability and security of that code. While this transparency argument does make intuitive sense and many cite it as a rationale for selecting OSS alternatives, there isn't any hard data to suggest that OSS is inherently any more secure or stable than commercial software. Linux offers the best case due to its maturity, the involvement of major software vendors, and the sheer volume of reviewers -- all of which means that bugs and security holes are discovered and patched very quickly. On the other hand, in many domains it is quite clear that the commercial alternatives are both more reliable and more secure, owing to their architecture and the quality control measures that many commercial vendors enforce. Without question, security needs to be designed into a program, and a program's security needs to be assessed on a case-by-case basis. In any case, the platform foundation layer of the software stack is where most of this scrutiny is focused.

4. Linux as an agent of economic development. This situation is specific to governments, and to the case of Linux, and typically occurs in developing countries. Governments are using their purchasing power and influence in the economy to create a critical mass of demand, stimulating the development of local skills and local business activity in the ecosystem that grows up around an OSS platform. The hope is to stimulate a domestic software industry. We even see governments sponsoring the establishment of Linux competency centers to accelerate Linux adoption and to act as focal points for skill development and investment.

Conclusion

Open computing, open standards, open source software, and commercial software that implements open standards are all succeeding because they are enablers of technological evolution and because businesses and governments recognize value in them. Businesses and governments will strive to attain the flexibility and business agility of the on demand world. Open computing platforms -- both hardware and software -- are essential underpinnings of the journey toward on demand computing. The role that standards have played in the evolution of e-business is well established. The role that open standards will play -- in both the commercial and the open source projects that embrace them -- is central to the further evolution of e-business toward more responsive, focused, and resilient capability.

Businesses and governments are embracing open computing, open standards, and some open source projects. IBM has made the strategic decision to embrace these concepts and is aligning its hardware, software, and services business to support its customers on the journey toward on demand.

Notes


[1] Generally speaking, "software stack" refers to the complete array of software -- from the operating system, through the middleware layer, to various specific components -- used to accomplish a specific business goal; for example, the assemblage of software designed for a financial services solution.

[2] The word "hacker" is used here in its original sense: a person who is highly skilled and interested in computers, not a person engaged in criminal mischief.


Testing strategies for dysfunctional software organizations

by Raj Kesarapalli

Software Consultant

The October 2002 edition of Software Development Magazine [1] reported the results of a survey that included a question on which roles (the survey listed twenty) in software organizations are hard to fill. Which roles made the top three? This may come as no surprise: QA engineer, QA specialist/architect, and metrics engineer.

I talk regularly to quite a few CEOs and CTOs in Silicon Valley, and although they have no problems staffing their engineering, marketing, and sales departments, staffing a QA department with the right people seems to be very challenging. "All good QA engineers would rather be developers," these managers complain. "The reason we have testing problems is that we can't find good QA engineers."

Well, I'm here to tell you that simply isn't true. In fact, the skill set required for development is very different from the optimum skill set for testing. Often, organizations fail at testing because they have unrealistic expectations for their QA personnel. They ask them to do the impossible. They make their QA people responsible for something they can't control, so that no matter how good they are, these people are likely to fail.

To make their organizations successful at testing, managers need to understand how testing works and rethink their testing strategies. In this article, I'll try to explain some of the issues and suggest improvement strategies.

Testing: Why doesn't it work better?

Today, we live in a world in which defects in software products have become the norm. Most software products are not tested thoroughly, so both users and the organizations that write the software expect them to have bugs. Why does this condition prevail?

Waiting until the eleventh hour

Most software projects don't finish on time because most of the defects testers do find before release are discovered at the eleventh hour -- which is when most testing departments receive the product. And because the remaining time frame is short and unrealistic, the testing department is put under tremendous pressure to release the product without testing it thoroughly.

Although some project managers understand the importance of testing earlier, many don't have a handle on how to accomplish it. Some have tried hiring a lot of testing personnel and buying expensive tools, only to eventually give up, simply because they don't see enough return on their investment.

Whose job is it anyway?

Another problem is assigning responsibility for testing. Inexperienced project managers often think the testing department should be fully responsible for product quality -- that developers have nothing to do with it. They don't even expect developers to test changes they make before checking in their code. This may delay defect detection until much later in the project, which results in missed deadlines.

In other cases, project managers do understand the need for some testing before developers check in changes, but they don't understand how to accomplish it without "wasting" development time. In yet other cases, managers ask developers to test before checking in changes, but they have no visibility into the amount of testing the developers actually do and no way of enforcing their requirement. They must depend on the developers to "do the right thing." As we all know from experience, unless you have the right processes in place, this strategy simply does not work.

Unrealistic expectations

Most testers work at the application level; they don't necessarily understand the source code, so it is hard for them to understand and test all the possible failure scenarios in an application. And, even if they do understand the source code and all of these scenarios, it is very hard, expensive, and impractical to test all of them at the application level. Unfortunately, this is what most organizations ask their testers to do, and the costs are enormous.

For example, suppose you are in charge of QA for a spreadsheet application. A significant feature of a spreadsheet application is its formula-processing component, and there are several different mathematical operators one could use for this. It would take at least a few hundred tests to thoroughly test the application's formula-processing capabilities. Expecting your testing team to test the application by entering all possible permutations and combinations of formulae each day would be unrealistic. The process would be very time-consuming and error-prone, and it simply wouldn't scale, because iterative projects increase functionality as they evolve.

Some managers invest heavily in GUI tools to automate this task, but then most of these testing teams spend their time maintaining the automation scripts rather than testing the application itself. GUI automation tools have their place, but they should be used with caution; they are certainly not the answer for all your testing needs. Still other managers hire lots of testing personnel for high-demand projects -- but there is still nothing to prevent developers from checking in broken code.

Making improvements

Fortunately, there is a better way to test. It begins with investing in a component-level automated testing framework. Since developers are familiar with their code, they have the capability to test an individual component or module without having to build the entire application. They need to write component-level tests only once. Then, before they check in their changes, they can simply start the automated test suite, wait for it to finish, and make sure everything is fine. In our spreadsheet application example, developers would write component-level tests for the formula-processing component just once; then, the test automation framework would run these tests each time a developer changed the source code for that component.

Most testing can be automated using this approach, which has significant advantages. First and foremost, component-level testing is very simple and inexpensive compared to testing the entire application. A significant percentage of testing can and should be done at this level; even though it takes up a bit of a developer's time, the returns are huge, because component-level testing is far more cost-effective and easily automated. And what does this better approach yield in the end? A development team that only checks in verified changes, and a testing team that focuses on the application as a whole, where testers can best apply their talents.

Introducing the unit test

Testing at the individual component level is known as "unit testing." Unit tests are designed to test a single class, component, or module in isolation.

Unit testing does require developers to write a few additional lines of code, but it doesn't have to be a waste of their time. After all, unit tests actually make developers more productive by helping them detect problems early in the project lifecycle. Since unit tests are command-line driven, they don't require expensive GUI tools to automate them. In fact, with a small investment in a "unit testing framework" you can completely automate the running of unit tests. Developers have to write these tests only once. Then, they can run these tests each time they make changes to the associated class or component.


In our spreadsheet application example, developers could simply write unit tests for the formula-processing component to test it in isolation. Running these tests would be a lot faster than running a comprehensive test set from the application GUI.
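
As a concrete illustration, here is roughly what such a test might look like in JUnit, a popular open source unit testing framework. The FormulaEvaluator class and FormulaException are hypothetical stand-ins for the spreadsheet's formula-processing component -- a sketch of the technique, not actual product code.

    import junit.framework.TestCase;

    // Sketch of a component-level unit test. FormulaEvaluator and
    // FormulaException are hypothetical names for the spreadsheet's
    // formula-processing component and its error type.
    public class FormulaEvaluatorTest extends TestCase {

        private FormulaEvaluator evaluator;

        protected void setUp() {
            // A fresh evaluator before each test keeps the tests isolated.
            evaluator = new FormulaEvaluator();
        }

        public void testSimpleAddition() {
            assertEquals(5.0, evaluator.evaluate("=2+3"), 0.0001);
        }

        public void testOperatorPrecedence() {
            // Multiplication should bind more tightly than addition.
            assertEquals(14.0, evaluator.evaluate("=2+3*4"), 0.0001);
        }

        public void testDivisionByZeroIsReported() {
            try {
                evaluator.evaluate("=1/0");
                fail("Expected a FormulaException for division by zero");
            } catch (FormulaException expected) {
                // The component should report the error rather than crash.
            }
        }
    }

Because tests like these exercise the component directly, they run in seconds from the command line -- no GUI and no full application build required.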

Unit tests also aid developers in debugging. For the development projects I've worked on, debugging an entire application meant rebooting the machine every time the application failed. Had there been unit tests for each component, we could have debugged at the component level without rebooting at all.

Finally, unit tests help to unify project teams. Even though developers write the unit tests, there is nothing to stop the QA personnel from running them. Developers can run unit tests only for the components they are working on, and then QA personnel can run unit tests for all the components at regular project milestones, and analyze the results. Of course it is important that QA teams test the application as a whole as well, so let's consider the relationship between component testing and testing the entire application.

The role of system testing in the unit test strategy

Testing the application as a whole -- or system testing -- is what QA teams are uniquely responsible for. In most cases, what you can test easily through unit testing requires significantly more effort and resources to test during system testing. Yet, many organizations don't adopt unit testing and put an excessive burden on their QA team, which often leads to QA team failure and poor product quality.

Unit testing is efficient, inexpensive, and simple, whereas system testing is time-consuming, expensive, and often done manually -- which can lead to errors. However, both methods are necessary. The trick is to strike the right balance between the two. Early involvement by a QA architect can help project managers decide on the most effective testing strategy. In our spreadsheet application example, unit tests can ensure that all the different formulae work, whereas a single system test can ensure that the formula-processing component is integrated properly into the application.

If you expect drastic changes in your GUI, then a GUI automation tool may not be helpful. However, if your GUI is mature and will not change a lot, consider investing in a GUI test automation tool to automate system tests. With a unit testing framework in place you will need fewer system tests, so the time required to create and maintain the GUI test automation scripts will be minimal.

Hurdles in the unit testing strategy

If unit testing is the best way to go, why don't more software development organizations adopt it? Well, there are a few hurdles managers should be aware of.

First, as I've already mentioned, in many organizations project managers do not expect developers to write unit tests, and most QA managers don't have anything to say about how the development team works.


The unit testing strategy will not work unless engineering managers, senior management, and even the developers buy into it. Here is a real-world example. One of my friends who works for a large ERP vendor tells me that when his manager initially proposed unit testing, some members of his development team resisted the idea; but his manager wanted to try it because their last project had serious quality problems. Though they were reluctant, he made the developers do unit testing. Now, a year later, my friend tells me that the only defective components they find in system tests are the ones for which they forget to write unit tests. The entire development team has bought into the idea, because unit testing is actually helping them be more productive.

A second hurdle is that many senior managers do not consider unit testing until too late in the lifecycle. Most poorly designed applications are the work of developers who lack a good understanding of component architectures, and poorly designed applications are hard to test, too. It pays to have your architect think about testing during the modeling stage; better yet, hire a QA architect early -- in the Inception phase of the project. The unit testing strategy not only helps simplify the testing process, but also forces developers to write well-componentized, reusable code.2

A third hurdle is that, quite often, existing applications are not componentized, which makes a unit testing strategy difficult to apply. (If you want to find out how well your application is componentized, just ask your developers how easy it is to write unit tests for it.) In this case, a good strategy is to treat the entire application as one component and implement a command-line interface that drives the application without invoking the GUI. This lets you automate testing through simple command-line scripts instead of testing entirely through the GUI. Because command-line testing is much less expensive than GUI testing, you'll benefit greatly from this approach. And if you happen to be in this situation, I'd also recommend bringing in a testing consultant.
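To illustrate, such a driver can be as simple as a main() entry point that feeds inputs to the application core and prints results for a script to compare. This is only a sketch; the Spreadsheet class and its evaluate() method are hypothetical stand-ins for your application's non-GUI API:

    // Hypothetical command-line driver that exercises the application
    // core without starting the GUI.
    // Usage: java SpreadsheetDriver "=2+3" "=10/4"
    public class SpreadsheetDriver {
        public static void main(String[] args) {
            Spreadsheet app = new Spreadsheet(); // hypothetical application facade
            for (int i = 0; i < args.length; i++) {
                // One result per line, so a shell script can diff the output
                // against a file of expected results.
                System.out.println(args[i] + " -> " + app.evaluate(args[i]));
            }
        }
    }

A nightly script can then run this driver over a file of representative inputs and diff the output against a saved baseline, giving you a compare-against-a-baseline capability without any GUI tooling.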

Finally, the setup of the unit testing infrastructure is critical to success. Without a test harness that runs the unit tests, it will be hard to sell the concept to the development team. It is important to have the harness ready before asking the developers to start writing unit tests. Otherwise, you will be depending on individual developers and testers to do "the right thing," and this simply will not work. A test harness gives managers greater visibility into the developers' successes and failures, so they will be more likely to take unit testing seriously. A testing framework is a one-time investment, so again, it makes sense to bring in a consultant to implement it. Once the framework is in place, developers can easily write and add unit tests on top of it.
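At its simplest, such a harness can be an aggregate suite that testers run with one command while developers continue to run individual test classes. Here is a minimal sketch using JUnit 3.8's standard suite mechanism; FormulaProcessorTest is the example above, and ChartEngineTest is a hypothetical placeholder for another component's tests:

    import junit.framework.Test;
    import junit.framework.TestSuite;

    // Aggregates the unit tests for every component so that QA can run
    // them all with a single command at each project milestone.
    public class AllComponentTests {
        public static Test suite() {
            TestSuite suite = new TestSuite("All component unit tests");
            suite.addTestSuite(FormulaProcessorTest.class);
            // Add one entry per component as its tests are written, e.g.:
            // suite.addTestSuite(ChartEngineTest.class);
            return suite;
        }

        public static void main(String[] args) {
            junit.textui.TestRunner.run(suite());
        }
    }

A production-grade harness adds the reporting and baseline-comparison features listed below, but even this much gives managers a single, repeatable command with which to gate check-ins and milestones.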

A good unit testing framework should have the following features:

● Enough flexibility to allow developers to run unit tests for individual components before they check in their changes, and to allow testers to run unit tests for multiple components with a single command.


● Good analysis features (such as the ability to compare test results against a baseline, etc.).

● Good reporting features (targeted toward technical as well as managerial audiences).

● Complete automation capability for running tests.

● Command-line driven (as well as GUI driven).

● Open architecture so that it can be integrated into third-party build automation tools.

Conclusion: Find a balance

Most organizations incorrectly attribute their testing woes to not being able to find the right testing personnel; but in reality, they fail at testing because of their approach. Good test practice involves planning early, striking a good balance between unit testing and system testing, making a strong, organization-wide commitment to quality, and establishing a framework that helps automate testing and provides greater visibility to senior management. Bringing in a QA architect early is probably the best investment a development team can make.

This approach to testing can greatly reduce testing costs and improve developer and tester productivity as well as the overall application architecture. Indirectly, it improves the maintainability and reusability of the source code and helps you find defects early in the development cycle. It really helps every facet of your development process.

Note: If you have other successful test strategies that you'd like to share, I'd be interested in hearing about them. You can reach me at [email protected].

Notes

1 http://www.sdmagazine.com

2 To understand how unit tests result in better code, take a look at my previous Rational Edge article "Promoting Component Architectures in Dysfunctional Organizations."



The developers and consultants at Technology Solution Partners LLC (TSP) start every project with the same goal: exceed customer expectations. Based in Shelton, Connecticut (about 60 miles north of New York City), TSP is an information technology consulting and services company that combines technical expertise with powerful development tools, including IBM® Rational® Rapid Developer, to provide comprehensive and cost-effective software development, management, and monitoring services.

Architected RAD Gets Project on Track

On a recent project, TSP faced a significant challenge: to help a leading New York City hospital network (a $5 billion enterprise) create and deploy a full-featured, high-quality cardiac assessment and reporting system. The project was initially attempted unsuccessfully by a large, well-known consulting firm. After watching the consulting firm spend nearly a year to deliver a system that not only failed to meet user demands, but also neglected to take into account the requirements of its partner facility network, the hospital network sought the professional help of TSP.

In addition to the professional expertise of the TSP team, they also leveraged the benefits of Rational Rapid Developer -- an architected rapid application development (ARAD) environment. This powerful combination enabled TSP to quickly build a complete n-tier, enterprise-class system that not only meets the hospital's requirements, but provides additional functionality that is helping doctors better assess and analyze cardiac surgeries.

TSP's President Girish Gupta reports that Rational Rapid Developer was a great help to his team's success on the project. "Without Rapid Developer, we would have needed 15 people -- instead of just five -- and another three managers to manage those 15 people. Also, we could have never accomplished what we wanted to in the short timeframe that we had. With Rapid Developer, we are able to make everything happen on time -- develop and deliver the application, deploy it, and maintain it. We could do all these things, because with Rapid Developer our focus was on the application and the business problem, not developing the framework or trying to figure out how to code something in Java™."

Industry: Health care services

Organization: Major New York City-based hospital network

Description: Cardiac assessment and reporting system

Business Problem: Building a Web-based cardiac assessment and reporting system to help increase productivity, reduce costs, and enhance customer satisfaction.

Solution: IBM Rational Rapid Developer, Microsoft SQL Server, Microsoft MTS

Key Benefits:

● Built a complete n-tier, enterprise-class cardiac assessment system that met and surpassed customer requirements.

● Salvaged a stalled development effort started by another consulting firm.

● Jump-started development efforts by reverse-engineering the database design.

● Gained customer buy-in early by rapidly creating prototypes and refining them with an iterative process.

● Saved time and resources by dramatically streamlining J2EE development and n-tier infrastructure efforts -- resulting in the ability to focus on higher-value areas such as comprehensive documentation, online help, and added functionality.

● Delivered a HIPAA-compliant, problem-free application on a tight schedule.

● Completed a project that would typically require 15 developers with just five.

TSP Delivers Comprehensive Cardiac Assessment and Reporting System to New York City-Based Hospital Network On Time Using IBM Rational Rapid Developer

TSP rapidly constructed a well-architected, n-tier cardiac assessment and reporting system using Rational Rapid Developer. Timesaving features like reverse-engineering and application prototyping led to early customer buy-in, streamlined development efforts, and created the ability to concentrate on real business issues.


Simplified N-tier Application Development Shortens Development Efforts

The five developers from TSP used Rational Rapid Developer to complete the project in just three months from the time the specifications were set -- an impressive accomplishment considering the complexity of the application and its numerous requirements.

Gupta explains, "The State of New York requires all New York hospitals that do cardiac surgery to report specific data on cardiac patients to the state every quarter. The hospital network facilities had used different systems for the three types of cardiac surgery: adult, pediatric, and percutaneous. At each hospital the data was manually collected on forms and entered into different applications. With so many systems, and the manual work involved, the reports were frequently late and required intensive effort by network administrators to manage reporting to the state and analyzing across the network."

"When we took over the project, we said, 'We want to go through the basics. Tell us what the needs are.' We sat down with the users and went through all the requirements. We started from scratch. We were under the gun to deliver, because everyone had already been waiting for this system, and the next deadline to report results to the state was fast approaching -- we knew Rational Rapid Developer would be an ideal fit."

Focus on Business Value

Initially, the basic requirements of the hospitals were simple -- they needed a HIPAA-compliant system that would enable the reporting of cardiac data to New York State, on time. TSP delivered a comprehensive solution that not only fulfilled those requirements, but also included additional functionality. Gupta notes that because Rational Rapid Developer enabled the TSP team to focus on the business logic of the application -- and not the implementation details of various layers in the n-tier system -- they had time to add various features that the hospital now finds indispensable. "One of the things we were able to achieve was validation of the data so that it meets the state's requirements. When the hospitals submit data, the system checks it to ensure validity. That was very important, because now their data is never rejected by the state. Last year was the first time they were able to send the data on time for all four quarters -- and each report was accepted. This was a big achievement, with very high business value," Gupta says.

"We were able to provide all of those features, all those bells and whistles that the users wanted, because Rapid Developer saved us so much time in other areas. With Rapid Developer, we were able to focus on the application, not on how to make it happen. Because we were not bogged down with how to construct the Web pages, for example, we could take that proactive approach," Gupta concludes.

Rapid, Iterative Prototyping Led to Customer Satisfaction

Recognizing the shortcomings of the earlier approach, the TSP team knew they had to get buy-in from their customers as early as possible. They applied an iterative development approach and used Rational Rapid Developer to rapidly develop a set of easy-to-understand, browser-based user interfaces, which they then demonstrated for the hospital. Through a combination of visual modeling and automated code construction and deployment, Rational Rapid Developer enabled the TSP team to rapidly prototype and deploy full use cases for review by their customer. Gupta recalls, "When we started, the users were still not sure what they wanted, so we had to go through analysis iterations to determine what they wanted. In the meantime, we developed demo versions, which we standardized using guidelines from the Center for Disease Control (CDC). With Rapid Developer, we could develop different concepts quickly. We showed them three or four designs with different themes. Using the rich style repository functionality of Rapid Developer, TSP professionals were able to standardize the look and feel across the entire application. Additionally, we showed them how a user would navigate the site. They picked the theme they liked, and we were set."

New Design Improved Ease of Use and Performance

Rajasekhar Gopisetti, Manager of Development at TSP and a key contributor to the project, notes, "While it took months for the other firm to develop a system that could be demonstrated, TSP was able to demonstrate a coherent and consistent user interface with Rational Rapid Developer in just a few days. Rational Rapid Developer helped TSP develop an interface that was not only better organized -- with a separate page for each section -- it also performed better, because only a fraction of the data was being retrieved or saved when a page was displayed or submitted."

Jump-started Development

With the user interface modeled and developed in Rational Rapid Developer, the TSP developers had a running start when they began implementing the business logic for the application. To further accelerate the early phases of development, the team used Rational Rapid Developer to leverage some of the work started by the other consulting firm. Using Rapid Developer, TSP reverse-engineered the existing database into a database model. They then made changes to the model in Rational Rapid Developer, created a new database schema, and persisted the new database for Microsoft SQL Server. "The reverse-engineering really gave us a healthy jumpstart," Gupta notes.

Reduced Maintenance and Testing Costs

Rational Rapid Developer provided the TSP team with significant advantages in the testing and maintenance phases of the development life cycle as well. Gopisetti states, "With Rational Rapid Developer, maintenance is very easy. The developer is only dealing with the business logic, and nothing else -- not the database construction, or how he is going to update it. Rapid Developer handles all that for you. Developers are most focused on writing the business logic -- the true value. That is a real advantage in the maintenance of the application. We don't have to mandate a particular style or development framework; it is all taken care of by Rapid Developer. Since it is all automatically constructed code, we spend a lot less time in maintenance, debugging, testing and deployment."

Gupta agrees, "With Rapid Developer we don't even need a person to maintain the code after deployment. In the past we would need somebody who knew the code intimately. Now, you go into the object model and update it directly. It is that simple. And our testers can focus on application-oriented tests rather than technology-oriented tests. We are not testing to see why a page wasn't working because it was coded wrong in Java; we are testing to make sure the application's business logic was correct."

Exploring New Opportunities

Rational Rapid Developer helps TSP manage the complexity of multiple technology platforms by constructing complete applications across all tiers of a system, for a wide range of deployment environments. This provides TSP customers with technology vendor independence and insulates their development teams not only from the complexity of n-tier development, but also from rapidly changing technologies. The ability of Rational Rapid Developer to construct these multi-tier applications for a wide range of Web servers, application servers, relational and mainframe databases, and message transport technologies is helping TSP pursue new opportunities.

"On this project, we are using Microsoft SQL Server 2000, the Windows 2000 platform, and COM/DNA technology," Gupta explains. "Recently, we started working with a prospective customer, and they wanted to see what we could do in the [IBM] WebSphere® and [BEA] WebLogic areas. Using Rapid Developer, we quickly generated the cardiac application for those environments. The application worked perfectly and we were able to demonstrate it online. With Rapid Developer, we can quickly bring the application to any technology we need to."

More Productive Developers, Substantial Cost Savings

One of the key benefits that TSP realized as a result of using Rational Rapid Developer is the increased productivity of new Java programmers on complex projects, such as the one they completed for the New York hospitals. Gupta explains, "We don't have to commit to a large team of highly efficient Java programmers -- we need good application developers. The five people on this project were not experienced, four-year programmers. They were able to handle it because they understood the requirements and how to code the business logic. Developers coming from Visual Basic or JavaScript -- it is easy for them."




As president of TSP, Gupta reports that Rational Rapid Developer has provided exceptional benefits from a business perspective, by helping shorten development times, reduce costs, and deliver complete, highly reliable solutions. "Rapid Developer enforces a standard way of developing the application. This reduces the cost of maintaining the application. And since everything is modeled right in Rapid Developer, and the model explains the majority of the application, you don't need to create intensive documentation. The documentation effort is reduced significantly, so that on a regular project we can finish everything, including documentation and help, on schedule. After the analysis is done, we can complete a project using Rapid Developer in 35 percent of the time it would take without Rapid Developer. And, the benefit to the hospital is that they have gained a powerful solution to meet their business needs. What's more is that the application works exactly the way the users want it to. The combination of our exceptional team of professionals, coupled with the power of Rapid Developer, enabled us to deliver this problem-free and feature-rich application well within the required timeframe."


Copyright Rational Software 2003 http://www.therationaledge.com/content/jul_03/m_organizing_mc.jsp

Organizing RUP SE projects

by Murray Cantor

Principal Consultant, Rational Software, IBM Software Group

IBM RUP SE® is an extension of the IBM Rational Unified Process®, or RUP®, for addressing systems development. RUP SE can be applied not only to software development and integration projects, but also to projects that include hardware development or acquisition and require specification of worker roles. This article provides a brief overview of how to extend generic RUP project management principles to RUP SE projects.

As a RUP extension, IBM RUP SE specifies that projects adhere to certain fundamental principles:

The project lifecycle model includes the four RUP phases -- Inception, Elaboration, Construction, and Transition -- and the RUP disciplines: business modeling; requirements; analysis and design; implementation; test and assessment; deployment; configuration and change management; project management; and environment. In fact, the familiar RUP diagram (see Figure 1) applies unchanged to RUP SE.

Project activities are not serialized; instead, teams evolve artifacts -- including the project plan -- in parallel, and detail them as their understanding of the project problem and solution increases.

The system is developed through a series of iterations, each satisfying more of the functional requirements. The focus is on finding and eliminating problems early, thereby reducing risk.


Figure 1: The Rational Unified Process lifecycle phases focus on risk reduction

Applying these principles to systems development raises two major issues:

● Team structure. How do we partition the staff into development teams and assign staff roles and responsibilities?

● Iteration planning. For large IBM RUP SE projects, how do we coordinate multiple development teams? When hardware is involved, how do we apply iterations to the hardware development?

This article addresses these issues by showing how to apply RUP principles to RUP SE projects. It starts with a general discussion of the concepts underlying development project organization: basing the team organization on the project architecture, and the role of requirements analysis in partitioning effort. A brief overview of the RUP SE architecture framework follows that discussion.

Note that this article assumes an understanding of the fundamentals of IBM RUP SE:

● The RUP SE architecture framework 1

● UML (logical) subsystems 2

● Localities

● Requirements flowdown

For a more complete description of these concepts, see the IBM Rational whitepaper, "Rational Unified Process® for System Engineering, RUP SE 1.2,"3 and the RUP SE Rational Unified Process Plug-In.4


Partitioning strategies

One lesson learned from software development that carries over into system development is that there is a diseconomy of scale related to effort. As a project grows, more and more effort is spent on communication among the developers. As Fred Brooks5 points out, the number of conversations required has the potential to grow quadratically with the number of team members, and this growth can actually occur in poorly organized projects. Even in well-organized projects, the effort can grow to the 1.2 power with the number of staff.6 Hence individual productivity falls off as the size of the effort increases. A fundamental management task is to partition the effort so as to manage communication among the developers. One strategy is to partition the effort into teams and then minimize communication among those teams. This divide-and-conquer strategy can prevent a quadratic loss of individual productivity.
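To put rough numbers on this (a back-of-the-envelope illustration, not a figure from Brooks or Boehm): with n team members there are n(n-1)/2 potential pairwise communication channels, so a 10-person team has 45 and a 40-person team has 780. Quadrupling the staff multiplies the potential conversations more than seventeenfold, whereas partitioning those 40 people into four largely self-contained teams of 10 cuts the dominant channel count back to about 4 x 45 = 180, plus a handful of interteam links.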

There are two approaches for partitioning the effort:

● Partitioning by requirements. Partition the system requirements into sets, and then assign teams to take responsibility for developing the parts of the system that deliver each of those sets.

● Partitioning by architecture. Assign teams to develop subsystems or subcomponents of the system. The requirements for these architectural elements are derived from the role they play in meeting the system requirements.

Let's explore each of these approaches.

Using requirements to partition effort

The requirements approach is common in systems development. The perceived advantage of partitioning by requirements is that it sharply decouples the effort, which simplifies the management problem. Different teams build different components that meet different requirements. These teams do their work in relative isolation, and, at the end of the project, bring their component to be integrated into the system.

This means of decoupling the development seems very attractive on the surface: There is little need for the teams to communicate. The belief is that each team can enhance productivity by using its own processes. In the end, what was originally a big, unmanageable project becomes a set of smaller, manageable programs.

However, this apparent simplicity often comes at a price. First, note that decomposing the system by requirements does, in the end, impose an architecture: the components built in isolation become an implicit architecture. But since this architecture was not explicitly constructed to address such quality issues as extensibility and maintainability, the resulting system tends to suffer from high service costs and may be incapable of evolving to meet changing mission needs.


A second problem with the requirements approach is that complications can arise when the team efforts are coupled at integration. Even though the components meet different requirements, they often need to communicate. Interface documents specify how the components access each other's services, and before they can proceed with their work in isolation, each team needs a fully documented, frozen interface document. For a new system, it is unlikely that such a document will be available early on, since such interface specifications result from a detailed design effort -- and that will not yet have occurred. So each team needs something it cannot have. In practice, this leads to premature attempts to freeze the interfaces in an Interface Control Document (ICD) and then interminable interface control meetings, as the shortfalls of the ICD become evident. Each team may go into an ICD meeting more focused on preserving its schedule than on finding a good technical solution.

Finally, a third problem is that modern systems require more integration than did past systems. This places more attention on internal reuse of hardware and software components, and on the ability to redeploy the components as you add more capability to the system. With all of these new demands, the architecture must be managed more closely than is possible when the effort is decomposed along requirements.

Using architecture to partition effort

This second strategy assumes that the system is partitioned into parts. In other words, it assumes an architecture whose elements were chosen to achieve an optimal design while balancing stakeholder needs. Teams are assigned to build each grouping of parts.

As we noted, the requirements method implies an architecture, but not a good one. The implied architecture does not account for many factors that need to be considered in specifying a quality architecture: maintainability, extensibility, supportability, overall responsiveness, and cost of ownership. Partitioning the effort along such a quality architecture introduces some managerial complexity, but the extra effort yields a superior system.

In the following sections we will address the challenges that result from allocating along architecture lines:

● Project management needs to understand the architecture to staff the team accurately.

● Since optimal architectures are more coupled than the implicit functional-allocation architectures that result from requirements-based staffing strategies, they require more team communication.

● The architecture needs to evolve along with the team's understanding of the system design and implementation. Some team members must be responsible for maintaining the integrity of the architecture throughout development.

● The requirements that each team manages are derived from system requirements, not from an allocation of system requirements. We will discuss the distinction between derived and allocated requirements below.



As we shall see below, in the end, each of these challenges sets the stage for optimizing project performance.

RUP SE architecture and requirements

As mentioned earlier, this article assumes some understanding of RUP SE artifacts and workflows. This section highlights RUP SE concepts pertinent to our discussion: derived requirements; maintaining traceability between system requirements and model element requirements; and separation of concerns between the logical, physical, and information aspects of the system architecture.

Probably the most important notion is that of derived requirements. For reasons alluded to above, generating requirements for analysis elements and their implementations (subsystems, localities, etc.) is not merely a matter of sorting the system requirements. Instead, these are newly derived requirements based on the roles and responsibilities of the elements and how they work together to meet the system requirements. The workflow for deriving the analysis model requirements is detailed in the RUP SE whitepaper and RUP SE Plug-In mentioned earlier.7

One of the RUP SE disciplines is to maintain traceability between the system requirements and the analysis model element requirements. This traceability is usually not a decomposition tree, as found in functionally decomposed architectures; it is a many-to-many mapping. As discussed below, maintaining the traceability between system and analysis model requirements is important for iteration planning.

Another important aspect of the RUP SE framework is that, at the analysis level, the decomposition is not into hardware and software, but rather into logical and physical elements, based on separation of concerns. Given the flexibility of modern technology, the hardware/software choices for product development (e.g., whether a logical capability is realized in VHDL, in an ASIC, in firmware driving a DSP, or in code running on a CPU) are typically price/performance trade-offs that may change over the commercial life of the product. In fact, at a given point in time, a single analysis model may be realized in a variety of products, with different choices as to how to physically provide the logical capability, meeting different price/performance points. Further, given modern embedded system development environments, the distinction between hardware and software is somewhat blurred.

Also note that the software and hardware cannot be developed in isolation. The structure of physical software components (executables, etc.) is based on the physical architecture. The workflow for determining software components based on the logical and physical architecture is provided in the same RUP SE whitepaper and RUP SE plug-in mentioned earlier.8

RUP SE project organization


In setting the organization, it is useful to keep some fundamental principles in mind. Many of these principles apply to any large RUP-based development effort:

● Cover all of the RUP core workflows.

● Balance communications by assigning developers who need to communicate frequently to the same team whenever practical.

● Maintain architectural coherence through cross-team membership.

● Plan and deploy resources so that all teams function throughout the development lifecycle.

● Ensure that implementation includes integration of the separate components throughout the lifecycle.

Note that the last two principles follow from using RUP's iterative strategy. As mentioned above, the system development activities (requirements analysis, architecture, design, implementation, integration, and test) are not serialized in RUP projects; instead, they are carried out in parallel as the team delivers the system as a series of iterations with increasing capability.

We apply these principles at the system level, using associated RUP SE artifacts as the basis for defining team deliverables. As we shall see below, the analysis model serves as the architectural basis for organizing the effort.

Team structure

The RUP SE project organization consists of the following teams with overlapping membership.

● Enterprise/business/mission modeling team. Develops enterprise business/context models in order to set system context and derive system requirements.

● System analysis team. Builds the analysis model, including the UML subsystems and localities. This team also develops and maintains the derived requirements for the analysis model, and it may build the analysis-level process and data models.

● Design and implementation teams. Responsible for the detailed design and implementation of components within a given viewpoint.

❍ Subsystem teams. Develop detailed class design and associated software modules for one or more subsystems.

❍ Locality teams. Develop detailed hardware specifications, design, and hardware components for one or more localities.

❍ Other teams. Might include data modeling and computer/human interaction.


● Build and integration team. Receives components developed by the design and implementation teams and builds system iterations.

● System test team. Plans, executes, and reports on system tests.

● Operation and maintenance team. Performs field delivery, tracks problems, prioritizes change requests, and delivers updates and patches.

● Project management team. Performs ongoing iteration planning, context management, and stakeholder communications.

Figure 2 shows the relationships among the various teams, with the arrows indicating communication paths between teams.

Figure 2: RUP SE project team organization

Let's explore in detail what each of these teams does.

Enterprise/business/mission modeling team

This team is responsible for establishing system requirements by defining the role that the system under development will play in the overall enterprise. This team applies flow methods to create system use cases, system services, and system supplemental requirements. Over the course of development, this team may update the enterprise model as the system specification evolves. In addition, the team serves as an authoritative source for dealing with requirement issues that emerge during development.


Ideally, this team should consist of business analysts and/or domain or process experts, along with some or all members of the system analysis team.

System analysis team

This team builds and maintains the system analysis model, including each of the views. In addition, this team carries out joint flowdown activities to derive the system use-case survey and services, survey of hosted services for each locality, and so on. This team looks after the integrity of the architecture, resolves interface issues, and addresses discovered needs for changes as the development evolves.

This team includes the lead architect(s) and some of the business analysts or domain experts. Over time, the lead architects of the design and implementation teams, the build and integration team, and the test team will join the system analysis team.

Design and implementation teams

These teams create the modules that are integrated by the build and integration team. Once the architecture is reasonably stable, the elements of the analysis model are partitioned among the design and implementation teams. In particular, some teams (subsystem teams) are given one or more UML subsystems to design and develop in code modules. Other teams (locality teams) are assigned localities to design; then they either develop these localities or acquire hardware modules for them.

The teams are organized around core competencies. Good logical and physical decompositions are generally aligned with technical competencies.

Subsystem teams. Note that the derivation of requirements for subsystems results in a clean set of specifications (requirements and interfaces) that permits subcontracting of work if necessary. Subcontracting is frequently necessary for large systems projects, and IBM RUP SE allows for independent development that maintains synchronization with other development plans. In many instances the subsystems are software based, but in others the work of UML subsystem development teams could be instantiated as firmware (e.g., EPROM-hosted code for an embedded controller) or hardwired into an ASIC. In still other cases, these teams may work with VHDL9 designers for a full hardware implementation.

In the case of UML subsystems, teams develop the subsystem context diagrams, the detailed class design, and the code modules. They need to maintain coordination with the locality teams to ensure the code modules are suitable for compilation and deployment on the target hardware platforms. The subsystem teams conduct unit tests.

Locality teams. The locality teams take into account the derived supplemental requirements and surveys of hosted services to determine the hardware resources needed for deployment of the logical services. These teams then develop the hardware specifications and look after the delivery of the needed components.



Build and integration team

The development teams develop modules for integration, and the build and integration team assembles them to create versions of the system. Note that IBM RUP SE employs iterative builds to manage content and technical risk. The build and integration team is responsible for the ongoing system integration effort and includes build environment experts. Each of the development teams has liaisons who are responsible for coordination with this team.

This team develops and maintains the makefiles and -- working with the configuration management system -- creates the labeled software builds.

If hardware is not available, the development teams may need a simulated or scaffolded hardware platform to build interim system iterations. The build and integration team, working with the locality development teams, is responsible for providing those resources.

System test team

This team plans, executes, and reports on the system tests for each of the iterations.

Operation and maintenance team

This team tracks field experience, prioritizes change requests, and performs other similar activities.

Project management team

A key project management tool is the evolving iteration plan. As we will discuss below, IBM RUP SE iteration planning consists not only of creating and maintaining a system-level iteration plan, but also of deriving iteration plans for each of the teams. The project management team, consisting of the system development project manager, the lead architects/engineers, and the development, integration, and test team leads, is responsible for iteration plans and standard project management tasks such as staffing and status collection and reporting.

Staffing curve

RUP projects are typically not fully staffed at their onset. For larger projects, there is not enough work to keep everyone busy during the Inception phase; a core group of key staff should carry out Inception activities. Once they meet the Inception lifecycle goals, then managers should have enough confidence about the scope and required effort to fully staff the project.

The same is even more true for RUP SE projects. Note that creating the analysis model is primarily an Elaboration activity, so in principle, the development teams should not be fully staffed until the end of Elaboration.


In practice, however, managers can typically identify core competencies among staff members at the beginning of a project, so the Inception team can include development team leaders. As their understanding of the project deepens, managers can better estimate the optimum size for these teams and begin staffing up at the beginning of the Elaboration phase. By the end of Elaboration the teams should be fully staffed.

Key roles

Note that each team works throughout the entire project. Hence, each system team (project management, enterprise modeling, analysis, test, and build and integration) needs a lead responsible for maintaining the system perspective, delivering results, and coordinating with development team representatives.

Iteration planning

A key feature of the RUP lifecycle is that the project team builds the product through a series of iterations of increasing capability. At a system level, the standard iteration principle applies: Iteration content is described by an increasing set of use-case scenarios. Early iterations address technical risk; later iterations address content risk. Since there is a large body of literature describing the advantages of iterative development and ongoing iteration planning, we will not describe those benefits here.

In addition to the standard iteration planning concerns, system development involves other critical issues:

● Each development team needs its own derived iteration plan.

● Hardware delivery dates may not support iteration delivery.

The notion of derived iteration planning is not new, and with every RUP product, IBM Rational delivers a whitepaper by Maria Ericsson that discusses the principles of derived iteration planning. In the following section, we will see how to apply those principles to RUP SE projects.

Derived iteration plans

Recall that an iteration plan consists of specifying a sequence of partially functional versions of the system, which culminates in a completely functional system. So each iteration requires a set of test cases to verify the functionality and the hardware and software components required to provide that functionality. Each development team needs the means for determining what logical and physical modules it must develop to support each iteration, and the integration team needs to know what pieces to expect from the development teams for integration. The specification of these pieces for each iteration, based on the system iteration plan, results in a derived iteration plan for each of the teams.


The IBM RUP SE requirements flowdown workflow provides the means for deriving these team iteration plans. Recall that the use-case flowdown workflow produces surveys:

● Derived use-case surveys are produced for each of the UML subsystems, along with the UML subsystem services that enable the use cases.

● A survey of hosted subsystem services is produced for each locality.

Each element of these surveys traces from one or more system use-case scenarios. Following the traceability, the subsystem iteration plan is derived from the system iteration plan by including the subsystem use-case scenarios traced from the included system use-case scenarios. Subsystem services define the interface requirements for a subsystem, and they must also be supported by the hardware realization of the localities; following this traceability path yields the locality iteration plans. Finally, the integration of the modules that realize each development team's iteration must be included in the build and integration team's iteration plan. In this way, every team's iteration plan is derived from the system iteration plan.
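As a rough illustration of the bookkeeping involved -- a sketch only, not an artifact prescribed by RUP SE, with all scenario and element names invented -- the derivation amounts to walking a many-to-many traceability map from the system scenarios selected for an iteration to the element-level items that realize them (written here in the pre-generics Java of the period):

    import java.util.*;

    // Sketch: derive a team's iteration plan by following traceability
    // links from the system use-case scenarios selected for an iteration.
    public class DerivedIterationPlan {

        // Many-to-many traceability: system scenario -> element-level items
        // (subsystem use-case scenarios, hosted services, and so on).
        private static Map trace = new HashMap();

        private static void link(String systemScenario, String elementItem) {
            List items = (List) trace.get(systemScenario);
            if (items == null) {
                items = new ArrayList();
                trace.put(systemScenario, items);
            }
            items.add(elementItem);
        }

        public static void main(String[] args) {
            link("Operator monitors status", "SubsystemA: publish status feed");
            link("Operator monitors status", "LocalityX: host status service");
            link("Operator acknowledges alarm", "SubsystemB: route alarm events");

            // The system iteration plan selects scenarios for iteration 1...
            String[] iteration1 = { "Operator monitors status" };

            // ...and each team's derived plan is the union of the items
            // traced from those scenarios.
            Set derived = new TreeSet();
            for (int i = 0; i < iteration1.length; i++) {
                List items = (List) trace.get(iteration1[i]);
                if (items != null) {
                    derived.addAll(items);
                }
            }
            System.out.println("Derived content for iteration 1: " + derived);
        }
    }

In practice a requirements management tool holds these links; the point is simply that each team's derived plan falls out of the traceability the analysis team already maintains.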

If hardware is not yet available to pursue a locality team's derived iteration plan, project management is faced with an important decision: Is the benefit of risk reduction worth the extra investment in a simulated hardware platform? The derived iteration plan provides the information needed to scope the required simulator capabilities and conduct a cost/benefit analysis. Often the analysis points toward doing a simulation, in which case the derived iteration plan provides the necessary requirements for it.

Resource balancing and smoothing

One challenge of managing a large-scale iterative development is to ensure that the effort required from each team to support an iteration is reasonable. Each team's project manager needs to ensure that the team's resources are adequate to support its derived iteration plan. In addition, managers need to ensure that staff are not idle during iterations that place only a "light" demand on their team. For example, it is generally not possible for a major subsystem to become fully functional in the first iteration; only some of the subsystem's functionality can be delivered that early on. The system project manager and subsystem lead typically negotiate and revise the system iteration plan to reflect what each of the subsystem teams can actually deliver within the iteration schedule. This activity, a standard project management responsibility, is often called resource smoothing.

Resource smoothing is the responsibility of the project management team. They achieve the balancing by jointly performing the following workflow:

1. Set the system iteration plan using the usual RUP principles of addressing technical and content risk.


2. Use requirements traceability to set the derived iteration plans.

3. Assess team resources for the derived iteration plan and propose an achievable alternative plan (performed by each project manager for each team).

4. Unify alternative proposals by shifting functionality in the system iteration plan, and occasionally planning to shift resources between teams.

As in standard RUP projects, RUP SE project iteration plans are never frozen; they continually evolve as managers examine the results of the delivered iterations. Maintaining the system plan and derived iteration plans is an ongoing responsibility of the project management team.

A final word: How hard is this?

As we mentioned above, using architecture rather than requirements to organize systems development may, at first, seem overly complex and difficult to put into practice. However, the complexity of the organization scales with the complexity and size of the effort. For smaller, simpler projects, you can combine roles and teams so that the organization is more streamlined. Even for larger teams, the difficulty is more perceived than real. For these projects, with their inherent diseconomies of scale, the RUP SE management approach tames the nonlinear growth in interteam communications, yielding productivity increases. For very large projects, applying RUP SE at several levels simplifies the management process, and synchronizing the plans from these various levels validates overall system planning. At every level, the teams have clear roles, areas of concern, and responsibility. In the end, process adoption can proceed smoothly, allowing all teams to focus on the system, and not on the process itself.

Notes

1 The RUP SE architecture framework can be found in the RUP SE extension to the RUP available on RDN. This framework is further explained in the draft whitepaper "The Rational Unified Process for System Engineering 2.0" (in press) by Murray Cantor, and available from the author.

2 RUP SE relies on a way to express logical decomposition. UML 1.4 characterizes the elements of the logical decomposition of the system as subsystems. At this writing, the UML 2.0 drafts use different semantics to express the logical elements. When UML 2.0 is adopted, we will change the semantic representation of logical elements to reflect the current standard.

3 Rational Software TP 165, April 2000 (http://www.rational.com/products/whitepapers/wprupsedeployment.jsp). A new whitepaper on RUP SE 2.0 is currently in production.

4 Available through Rational Developer Network (http://www.rational.net); authorization required.


5 Fred Brooks, The Mythical Man-Month, Addison-Wesley, 1997.

6 Barry Boehm et al., Software Cost Estimation with COCOMO II, Prentice Hall, 2000.

7 See notes 3 and 4.

8 See notes 3 and 4.

9 Very High Speed Integrated Circuit (VHSIC) Hardware Description Language




Copyright Rational Software 2003 http://www.therationaledge.com/content/jul_03/m_manager_pk.jsp

"A Project Manager's Guide to the RUP" (Chapter 14*)

from The Rational Unified Process Made Easy: A Practitioner's Guide to the RUP by Per Kroll and Philippe Kruchten (Addison-Wesley Object Technology Series, 2003)

In last month’s issue of The Rational Edge, we introduced you to this new book by Per Kroll and Philippe Kruchten with a chapter on the Elaboration phase of Rational Unified Process from Part II: The Lifecycle of a Rational Unified Process Project. This month, we provide another critical chapter from the book,* this time from Part IV: A Role-Based Guide to the Rational Unified Process.

When a software development project fails, the root cause is typically not the technology, but rather weak project management, the authors explain in this chapter. RUP can help managers navigate successfully through the difficulties inherent in such projects. Although it does not detail every aspect of project management, RUP does provide specific guidance on those aspects that relate to software development. Explaining the thinking that underlies that guidance, and highlighting essential RUP artifacts and activities, this chapter has much to offer both experienced and novice project managers.

*Chapter 14 posted in its entirety by permission from Addison-Wesley.

Chapter 14 pdf file (181 K)



PART IV

A ROLE-BASED GUIDE TO THE RATIONAL UNIFIED PROCESS



CHAPTER 14

A Project Manager’s Guide to the RUP

You are a project manager and are just about to use the RUP approach. This chapter is a guide to understanding your role in a software development project using the RUP. You will find a definition of the role of project manager and its interactions with other roles. We will introduce some of the key artifacts that project managers will develop and use. Finally, we will review some of the key RUP activities project managers are involved in.

The Mission of a Project Manager

There are many reasons why a project may fail or result in poor quality. Many of them may be attributed to all kinds of technical reasons, and we are often pretty quick to do so; technology is a convenient and nameless scapegoat. But authors and consultants such as Roger Pressman who have witnessed many projects can testify: “If a post-mortem [assessment] were to be conducted for every project, it is very likely that a consistent theme would be encountered: Project management was weak.”1

1. See Pressman 2001, p. 55.


A Complex Role

“People, product, process, project—in that order,” is how the same Roger Pressman defines the scope of software development project management.

People. Software development is very human-intensive and depends highly on the skills and the coordination of work among people. Many of the activities of a project manager will rotate around people and are focused mainly on the development team.

Product. Nothing can be planned, studied, or produced if the objectives and scope of the software to be developed are not clearly established. Although the project manager does not define all the details of the requirements, your role is to make sure that the objectives are set and that progress is tracked relative to these objectives. This involves extensive communication with parties external to the development team, as well as with the development team.


Process. If one person has to fully understand the process of developing software, it is the project manager. Project management is the embodiment of process. Having the RUP or not having it makes no difference if project management is not fully process-literate and does not drive the process. The process, supported by the right tools, is the common roadmap, understood and used by all team members.

Project. And then, once on the road, the project manager manages the project itself, planning, controlling, monitoring, and correcting the trajectory as often as necessary. The project manager is dynamically steering and adapting.


In the very end, though, the manager of a software project will not be judged on good efforts, good intentions, using the right process, or having done everything “by the book,” but on results. So throughout the lifecycle, the project manager should keep focused on the results, or any partial results that are getting the project closer to success. A RUP project is a collaborative effort of several parties, including the project manager. And within the very dynamic context of iterative development, the role of the project manager is more to “steer and adapt” than to “plan and track” (as often is the case in other domains).

So the role of the project manager is complex and requires many different skills to be able to dynamically steer and adapt:

Technical skills to understand the issues at hand—the technologies and the choices to be made. Far too often, we run into organizations that still believe that a project manager just manages resources (including people) and does not need to understand the product or the process. As a project manager, you do not need to be a technical expert in all aspects, and you should rely on the right people to help you on the technical side (see Chapter 16 on architects and their relationship with the manager), but a good level of understanding of the technical issues will still be necessary to achieve the best results.

Communication skills to deal with many different stakeholders and an ability to jump from one style to another: for example, from personal communication (face-to-face, such as interviews and meetings) to impersonal (status reports), from formal (customer reviews and audits) to informal (small-group brainstorming, or just walking around the project to test the mood).

A Person or a Team?


We tend to think of the project manager as one person. And in most small- to medium-sized projects (3 to 15 people), only one person will fulfill this role. But the RUP describes the project manager not as a person, but as a role that a person will play, and it is likely that on large projects, more than one person will play this role. There will still be reason to have one clear project leader, but that person should not feel the need to run all the RUP activities in the project management discipline.

First, there can be a small team of people doing the project management. One can be focused on planning; another one can deal with some of the key communication interfaces, with product management or the customer; another one can follow internal progress. Note that some of this specialization is already acknowledged by the RUP, which defines more than one manager role; there are managers’ roles with specialized expertise, for example, configuration manager, deployment manager, test manager, and process engineer.

Also, for larger software development organizations, say 25 people or more, it is common to have the organization broken up into smaller teams, and each team lead will be delegated part of the project management role, relative to one team. In other words, the project manager has delegated some of the routine management and will get some “eyes and ears” all across the project.

Finally, in larger projects, some groups can be set up to handle some of the management activities in a more formal way and to support the project manager:

• To monitor the project’s progress: a Project Review Authority (PRA) and a Change Control Board (CCB)

• To set up and improve the software process: a Software Engineering Process Authority (SEPA, sometimes also called SEPG)

• To drive the definition and adoption of tools: a Software Engineering Environment Authority (SEEA)

These groups are set up with people of the right expertise and authority, sometimes with full-time people, and they operate when necessary to support the management group or when dictated by the process.

Project Management

“Project management is the application of knowledge, skills, tools, and techniques to project activities in order to meet and exceed stakeholders’ needs and expectations from a project.”2

2. See PMI 2000.

Meeting or exceeding stakeholders’ expectations invariably involves balancing competing demands among

• Scope, time, cost, and quality.

• Stakeholders, internal and external, with different needs and expectations.

• Identified requirements (needs) and unidentified requirements (expectations).

Scope of the Project Management Discipline in the RUP

An important warning is now due. The RUP deliberately does not cover all aspects of project management, and it remains focused on the engineering aspects.

Despite what we wrote above about the first “P,” People, the project management discipline in the RUP does not cover many of the aspects related to managing people—all the human resources management. So, in the RUP you will not find guidance on how to hire, train, compensate, evaluate, or discipline people.

Similarly, the RUP does not deal with financial issues, such as budget, allocation, accounting, or reporting. Nor does it deal with legal and contractual issues, acquisition and sales, licensing, or subcontracting. Additionally, the RUP does not deal with some of the administration issues associated with people, finances, and projects.

There is a wide range of practices around the world on these topics, and there is a wide body of accessible knowledge that is not specifically linked to software development.

One great source of information is the Guide to the Project Management Body of Knowledge (PMBOK), developed under the auspices of the Project Management Institute (PMI) and endorsed by IEEE as Standard 1490-1998, Adoption of the PMI Guide to PMBOK.

The RUP does, however, concentrate on the software-specific aspects of project management, that is, the areas where the nature of software has an impact, making it different. The activities that are not covered by the RUP do take a significant amount of time and effort, and they require some skills. So they should not be overlooked when establishing the schedule of the people managing a project.


Software Development Plan (SDP)

It is hard to reduce software project management to a handful of recipes, but let us try to define the overall pattern—the good practice.

The best approach we have found so far is for the project manager:

1. To express the project’s plans (the expectations as seen from the project management) in the various areas: scope, time, cost, quality, process.

2. To understand what could adversely affect these plans over time; that is, what are the risks if the project does not follow these plans.

3. To monitor progress to see how the project stays aligned to the plan, using some objective metrics whenever possible.

4. To revise any of these plans if the project goes significantly off-course.

5. Finally, to learn from your mistakes, so that the organization will not repeat them in the next iteration or the next project.

Consequently, the key artifact a project manager will focus on is a Software Development Plan, which is an umbrella artifact containing many different plans, each one dealing with one management topic:

• Project plan and iteration plans (see Chapter 12)
• Test plan
• Configuration Management plan
• Measurement plan
• Risks
• Documentation plan
• The specific process the project will use—its development case

For better clarity, visibility, and accountability, the Software Development Plan may be one of the few formal artifacts of a project.

As the project unfolds over time, these plans are refined, corrected, and improved, as one may expect from iterative development; and to achieve this, other tactical artifacts are created. They usually take a snapshot view of the project to allow concerted reasoning about some tactical decision to be made:


• Review record (minutes)
• Issues lists
• Status assessment

One important aspect of the SDP is to define more precisely the process the project will use: This is the role of the development case described in Chapters 10 and 11. The project manager will set up and maintain the right degree of formality, the “level of ceremony” as Grady Booch calls it, that is adequate for this project. And this development case will also evolve as the project unfolds, based on the lessons learned at each iteration.

Iterative Development

This sounds like a leitmotiv in this book, but it is worth mentioning again. In an iterative development, you do not plan once and then monitor the project and try to coerce it to conform to the plan at any cost. You plan, and then replan, as necessary, again and again. You may therefore end up in a different spot from where you had intended to arrive in the very first plan, but you will find yourself at a better spot, or a more modest but more realistic spot, which is better than never arriving anywhere.

If you have never managed an iterative project, it can be daunting the first time.3

3. See Kruchten 2000b.

Risks

To effectively manage iterative development, the second concept the beginner RUP project manager must master and keep constantly in mind is that of risk. There are inherently many risks, of various magnitude and probability, that could affect a software project. Managing a software project is not a simple matter of blindly applying a set of recipes and templates to create wonderful plans engraved in stone and then bringing them back to the team for execution. Managing a project involves being constantly aware of new risks, new events, situations, and changes that may affect the project and reacting rapidly to them. The successful project manager is the one who is present, is curious, speaks to team members, inquires about technology, asks “why” and “how” and “why” again to identify new, unsuspected risks—and then applies the appropriate recipes to mitigate them.

Metrics

Another key word for the RUP project manager is metrics. To avoid being sidetracked by subjectivity or blinded by biases, experiences, or knowledge deficiencies, the project manager establishes some objective criteria to monitor (more or less automatically) some aspects of the project. A few measurements can be put in place to help you gather variables such as expenditure, completion (how much of the functionality is complete), testing coverage (how much have you tested), and defects (found and fixed), as well as the trends over time. Other useful metrics involve changes over time: amount of scrap and rework, or requirements churn, which can be tracked via a good Configuration Management system. The smart project manager would want to automate as much as possible the collection of these metrics to free more time for activities that might require more human interaction.
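By way of illustration -- and purely as a sketch, not anything the RUP itself prescribes -- the following Python fragment computes weekly defect discovery and resolution counts from a hypothetical defect export. The record layout and field names are assumptions; a real change-request tool would supply its own.

```python
from collections import Counter
from datetime import date

# Hypothetical defect records, as might be exported from a
# change-request tool. The field names are illustrative assumptions.
defects = [
    {"id": 1, "opened": date(2003, 6, 2),  "closed": date(2003, 6, 9)},
    {"id": 2, "opened": date(2003, 6, 4),  "closed": None},
    {"id": 3, "opened": date(2003, 6, 11), "closed": date(2003, 6, 12)},
]

def week_of(d):
    """Label a date with its ISO year and week, e.g., '2003-W23'."""
    year, week, _ = d.isocalendar()
    return f"{year}-W{week:02d}"

# Count defects found and fixed per week; it is the week-over-week
# trend, not the absolute numbers, that the project manager watches.
found = Counter(week_of(d["opened"]) for d in defects)
fixed = Counter(week_of(d["closed"]) for d in defects if d["closed"])

for week in sorted(set(found) | set(fixed)):
    print(f"{week}: found={found[week]} fixed={fixed[week]}")
```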

Metrics, from the current project and from previous projects, are what will help the team develop estimates, and in particular workload estimates (see Chapter 12 on planning). These estimates are a shared responsibility between the project manager and the rest of the team. The project manager cannot unilaterally impose them on a team.

Activities of a Project Manager

So what is it exactly that a project manager is expected to do, according to the RUP?

In the RUP you will find that the activities are grouped by theme:

• Activities to launch a new project

• Activities to define and evolve all elements of the Software Development Plan

• Activities to start, run, and close a project, a phase, or an iteration

• Activities to monitor a project

Launching a New Project

Based on an initial Vision Document, a project manager develops an initial Business Case that contrasts the scope of the project (and its expected duration and cost) with the potential return. The Vision contains the gist of the requirements: what it is that you want to achieve. The Business Case articulates the rationale for doing this software project. The Vision and the Business Case should be revisited many times until a project can be initiated and approved. It is never too early to start identifying risks, that is, any event that may adversely affect the project or make it fail. These risks will be the first thing the project should focus on in the next iteration.

Developing the Software Development Plan

Depending on the scope and size of the project, a project manager will develop some or all of an SDP. The organization may have developed ready-made templates that are more specific than the ones you will find in the RUP, with large segments already prefilled.

There are two important parts of an SDP:

• Planning time and resources, in a project plan and a staffing plan (which we described more fully in Chapter 12).

• Specifying the process this project will use: artifacts to be developed and level of ceremony and formality, resulting in a development case (as we described in Chapter 10). This includes specific guides, style guides, and conventions to be used in the project.

Other plans dealing with Configuration Management, documentation, testing, tools, and so on may have to be developed.

Starting and Closing Phases and Iterations

The project manager will plan in more detail the contents and objectives of phases and of iterations by specifying the success criteria that will be used to evaluate the work at the concluding milestones. These activities will require extensive interactions with all the team members and cannot be done in an ivory tower. Each phase and iteration will need to be properly staffed and activities allocated to team members.

As an iteration (or a phase with its major milestone) concludes, the project manager will assess the results of the iteration or phase and compare them with the expected results specified in the SDP. Discrepancies will trigger a revision of the plans or a rescoping of the project. The process itself may be improved.

For example, looking at the risks previously identified (“Integration of technology X with middleware Y”), you assess that you have indeed successfully integrated and tested them in a prototype, thereby eliminating this as a risk.

Monitoring the Project

As an ongoing activity, the project manager will use some indicators to monitor progress and compare it to the plans. This can take various levels of formality and use a combination of metrics (such as defect discovery and resolution rates) and reviews (informal and formal) to assess conformance to plans and quality of the product.

For example, if the defect discovery rate drops dramatically, this is a signal (a) that the testing effort is stalling, (b) that the new builds are not bringing any substantial new functionality, or (c) simply that the product is becoming stable.

There is at least one assessment per iteration, and somewhat more formality at the closing of a phase, as these major milestones may involve some strategic decision about the pursuit of the project. They place you at a point where you can consider cancellation or a significant rescoping of the project.

Finding Your Way in the RUP

To get started with the RUP, it is vital that a project manager thoroughly understands the concept of iterative development and the RUP lifecycle (phases and iterations). Then there are some key concepts: risk management, quality, metrics, and how the process is described (Roles, Activities, and Artifacts). As necessary, check the RUP Glossary for definitions. If you are familiar with project management, but not specifically with iterative software projects, it is in the area of planning that you may have the most to learn, and especially planning an iterative project (as described in Chapter 12).

You can enter the RUP via the Role: Project Manager and reach the various activities that define this role. Alternatively, you may start from the Artifact: Software Development Plan (its template and some examples). From there, navigate to the various activities that contribute to its development, or more precisely, to the development of the various plans it contains. This will lead you to the more specialized roles of Configuration Manager, Test Manager, and so on.

On small projects, or in a small software development organization, it is highly likely that the same person who plays the role of project manager will also be the process engineer, defining the project’s development case, facilitating the enactment of the process, and getting involved in process improvement activities as a result of iteration (or project) assessment. Then see the Role: Process Engineer.

Do not forget that a project manager will interact with many other roles and participate in their activities, in some form or another. In particular, the project manager will have to interact and coordinate almost daily with the Architect(s) and get involved in various reviews.

Conclusion

Following a defined process, such as an instance of the RUP, is not an easy way for a project manager to abdicate responsibilities and forgo common sense. The job does not consist in blindly developing any and all artifacts described in the RUP, hoping that the right thing will happen. The tasks do not consist of allocating all activities of the RUP to the team members and then saying, “Oh, I simply followed the RUP! I do not understand why we failed.” Recall that you should select out of the RUP the right set of artifacts, the right methods and techniques, and adapt them to your context. You must understand the project and the product deeply, and work with the architect, the analysts, the designers, and the testers.


You will be judged on concrete results, not on the means you have put in place. Your role is to steer and continuously adapt the process as the project unfolds, deciding which artifacts and which activities are bringing concrete results. To do this, you must get deeply involved in the project life, from a technical standpoint, to understand rapidly the key decisions to be made, to seize opportunities to get certain results faster, to manage the scope of the project, to “size” the process. This is done efficiently only in a collaborative manner, not by imposing a rigid bureaucracy that sets up a distance between project management and the rest of the team.

Remember also that all the management activities not described in the RUP are very important. You are not managing machines; you are not managing behaviors; you are managing people. Just running around telling them what to do and pointing at the RUP won’t do. You are setting goals and establishing a software development culture, a culture of collaboration and trust. And this is done through a high and constant level of communication.

So, to summarize:

• The project manager is not an innocent bystander but is part of a team and works collaboratively with this team.

• The project manager is responsible for the development and modification of the Software Development Plan.

• The plan is based on a configured process, adapted to fit the context of the project.

• The project manager is responsible for making the appropriate tradeoffs to manage the scope of the project and of each iteration.

• The project manager is constantly focused on risks—any risks—and how to mitigate them, should they prove to be in the way of success.

• The project manager keeps the focus on real results, not on intermediate and sometimes abstract artifacts.


Resources for the Project Manager

Further Reading

Murray Cantor. Software Leadership: A Guide to Successful Software Development. Boston: Addison-Wesley, 2002.

Tom Gilb. Principles of Software Engineering Management. Reading, MA: Addison-Wesley, 1988.

James A. Highsmith. Adaptive Software Development: A Collaborative Approach to Managing Complex Systems. New York: Dorset House Publishing, 2000.

IEEE Standard 1490-1998. “Adoption of the PMI Guide to the Project Management Body of Knowledge.” New York: IEEE, 1998.

IEEE Standard 1058-1998. “Standard for Software Project Management Plans.” New York: IEEE, 1998.

Steve McConnell. Software Project Survival Guide. Redmond, WA: Microsoft Press, 1997.

Fergus O’Connell. How to Run Successful Projects. Upper Saddle River, NJ: Prentice-Hall, 1994.

PMI. Guide to the Project Management Body of Knowledge (PMBOK Guide). William R. Duncan (editor). Newton Square, PA: Project Management Institute (PMI), 2000.

On the Web

Look in The Rational Edge (www.therationaledge.com) for management articles, in particular the various lively pieces by Joe Marasco on his experience managing software projects. Check the “Franklin’s Kite” section.

And see Philippe Kruchten, “From Waterfall to Iterative Development—A Tough Transition for Project Managers,” The Rational Edge, December 2000. http://www.therationaledge.com/content/dec_00/m_iterative.html.


Training Resources

The following training course has been developed to support RUP users; it is delivered by IBM Software and its partners (see www.rational.com for more information):

• Mastering the Management of Iterative Development (three days).


Book review

Modernizing Legacy Systems by Robert Seacord, Daniel Plakosh, and Grace A. Lewis

Addison-Wesley, 2003

ISBN: 0-321-11884-7
Cover price: US$40.49
352 pages

This useful book describes the process and technologies involved in updating a legacy system. Chapters 1 through 4 do a very good job of describing the problems inherent in working with legacy systems, which the authors define simply as having "code that was written yesterday." I couldn't agree more with their perspective. Once code is written, it needs to be maintained, updated, and managed, whether it is COBOL, Fortran, PowerBuilder, or Java. Essentially, all code becomes legacy code once it is written.

Early in the book, the authors present a Unified Modeling Language (UML) activity diagram to describe their proposed process for updating legacy systems. They then open each chapter by using that diagram to depict where they are in the process -- from Portfolio analysis completed (modernization candidates selected) to Modernization plan defined. The book defines ten main steps and two checkpoints for completing the process, including decision points to determine whether modernization is the correct choice.

To describe the process in detail, the authors follow a legacy system modernization project over the course of the book. As process experts from the Software Engineering Institute (SEI), they actually consulted on this project, which was to update and Web-enable a primarily COBOL retail supply system. The case study helps readers understand the flow of the process the authors recommend and brings reality to their suggestions, although at times they abandon the "story" and go into overly minute detail, discussing every process option they could have chosen.

I would have liked them to focus more deeply on the option they did choose, and to discuss at greater length how to be successful with that option (or any other). For example, in their discussion of requirements in Chapter 4, the authors do a good job of describing where to get requirements from, but they never discuss good processes and techniques for gathering those requirements. In most requirements-related situations I have been involved in, deciding where to go for requirements wasn't much of a challenge, but figuring out how to elicit them was.

The book does a good job of describing the different technologies the project used, as well as others available for modernization efforts -- Java/J2EE, Web Services, wrapper code, and different packaged systems -- providing an overview of each technology as well as customized ways to write integrations from the legacy systems to modern ones. The authors also discuss screen-scraping technologies as well as screen rewrites, but focus mainly on modernizing the software. This was a little disappointing: Based on the title, I expected to see discussions of all the hardware, software, development processes, and additional technologies involved in modernizing systems.

The book also falls down a bit when it comes to describing the actual process for implementation. It focuses primarily on understanding what you have, designing for change, and planning how to get where you want to go, but it doesn't go far enough into what you actually have to do to get there.

Overall, however, I learned a lot from these authors, who confirmed many of my beliefs about the importance of modernizing legacy systems and the best approaches to use. They provided good strategies for understanding systems that are already in place, starting with the workflow they follow throughout the book. Their examples on modeling, requirements management, and the process they followed were also helpful.

I would recommend this book for people who need a better understanding of the processes and technology decisions you must make when building software systems. For most of us in the industry, no matter what we are working on, there's probably a legacy system involved in some way.

- Eric Naiburg
Rational Software
IBM Software Group


Book review

Component-Based Product Line Engineering with UML by Colin Atkinson, Joachim Bayer, Christian Bunse, Erik Kamsties, Oliver Laitenberger, Roland Laqua, Dirk Muthig, Barbara Paech, Jürgen Wüst, and Jörg Zettel

Addison-Wesley, 2001

ISBN: 0-201-73791-4
Cover price: US$50.00
464 pages

First, a warning: The title of this book is misleading. If you're hoping for mainstream, tried-and-true process guidance, using standard notations supported by commercially available tools, then this is the wrong book. The following may help set your expectations for this book:

● The authors are researchers describing a theoretical approach (the KobrA methodology); they are not practitioners deriving best practices proven on real projects.

● The authors' proposed notation is not standard Unified Modeling Language (UML), but rather a custom notation loosely based on UML. There is neither explicit support for this notation in commercially available tools nor any suggestion that it will ever be part of the UML standard.

Don't get me wrong, though. This book is a worthwhile read, as it provides many interesting and important insights into component-based and product-line engineering concerns. The authors point out genuine issues and ambiguities in the current UML, and some of their proposed solutions are intriguing.

The KobrA (derived from Komponentenbasierte Anwendungsentwicklung, which is German for "component-based application development") methodology is based on a number of principles. Most of these -- such as parsimony, encapsulation, and locality -- are restatements of generally accepted component software engineering principles for keeping things simple, separating concerns, and minimizing coupling. Where KobrA is a bit more radical is in its application of what the authors call uniformity.

The uniformity principle reflects a major goal of KobrA: "to avoid the feature overload found in many other methods." Every behavior-rich element of a system is a Komponent. Thus, the UML's subsystems, packages, behavior-rich classes, components, and even whole systems all become Komponents. Since the term is so broad, KobrA uses qualifiers to distinguish between different kinds of Komponents:

● Instance vs. type

● Specification vs. realization

I find this uniformity quite attractive because it allows a system to be described as a Komponent and decomposed recursively into a hierarchy of Komponents. All behavior is described in terms of interacting Komponents, so you don't have to decide up front if the behavioral element is a class, a subsystem, or a package. The terminology is simple and consistent.

Nevertheless, I see great value in the software industry adhering to standardized notations to achieve effective communication between developers across projects and organizations. Although KobrA is a legitimate challenge to the UML that aims at greater simplicity and unity, the working group responsible for the UML is also driven by other important concerns, such as how to maintain backward compatibility and how to extend the UML's modeling capabilities. When they release the new UML 2.0, we will see how well they have balanced these concerns and whether they were able to address some of the concerns about complexity that led Atkinson et al. to develop KobrA.

The rest of their book is a broad mix of software engineering guidance for component-based development, which they illustrate using KobrA notation. It is weak in some areas, such as bridging requirements and design, and rich in others, such as component specification. Some of the guidance is interesting and instructive, such as the thirty-six pages of mathematically rigorous description of component configuration management. Much of the guidance is theoretical, without supporting tools or reports of commercial experience -- so proceed with care.

- Bruce MacIsaac
Senior Software Developer
Rational Software
IBM Software Group


Principles and techniques for analyzing and improving IBM Rational ClearCase performance

Part I: Performance analysis and monitoring

by Tom Milligan
Specialist, Software Configuration Management (SCM)
Rational Software, IBM Software Group

with

Jack Wilber
IBM Rational Consultant

On any given day, how many times does your development team check out or check in artifacts from your IBM Rational® ClearCase® versioned object bases (VOBs)? How many builds do they perform? If you pause to consider how many Rational ClearCase operations your team performs over the lifetime of a project, it is easy to see how even a small improvement in the speed of these operations can save a significant amount of time.

Over the past eight years, I have worked with development teams of all sizes and geographic distributions, helping them use Rational ClearCase more effectively and efficiently for software configuration management (SCM). I think it is fair to say that all of them appreciated any efforts that would enable them to get more work accomplished in a day, and ultimately complete projects faster. Whether you are a Rational ClearCase administrator facing a performance problem, or you are just looking to improve performance to give your team's productivity a boost, it helps to have a plan.

This article, Part I of a series on principles and techniques for improving IBM Rational ClearCase performance, provides an overview of the principles of performance assessment and advice on how to apply them in a Rational ClearCase environment. It presents an approach that I have found useful in diagnosing performance issues and arriving at a solution,1 and uses a case study to illustrate this approach.

In an upcoming issue of The Rational Edge, Part II of this series will discuss how to use specific tools and practices to assess and improve the performance of IBM Rational ClearCase in your organization.

Getting started

When I address a performance problem, I start by gathering general information. I try to identify characteristics of the problem and determine how the problem manifested itself. Performance issues can be classified into two broad categories:

● Issues that are suddenly serious.

● Issues that gradually worsen over time.

Slowdowns that have a sudden onset are usually easier to diagnose and fix, as they are often related to a recent change in the IBM Rational ClearCase operating environment. Performance issues that evolve over a long period of time -- sometimes a year or more -- are more difficult to resolve.

In many ways, the questions you ask to diagnose a performance problem are similar to those for tracking down a bug in an application, or those a doctor might ask a patient to locate the source of a pain. Is the problem repeatable or transient? Is it periodic? Does it happen at certain times of day? Is it associated with a specific command or action? For example, with IBM Rational ClearCase, does the problem only happen when a build is performed using clearmake or some other tool? And, as with programming bugs, the performance issues that you can reproduce easily -- such as those associated with specific commands -- are easier to deal with. Intermittent problems are, by nature, more challenging.

Once you have a better understanding of how the problem manifests itself, you can start digging deeper to determine what exactly is happening in the various systems that IBM Rational ClearCase relies on.

First principle of performance analysis and monitoring

Systems are a loose hierarchy of interdependent resources2:

● Memory

● CPUs

● Disk controllers

● Disks

● Networks

● Operating system

● Database (in this case IBM Rational ClearCase)

● Applications

● Network resources (e.g., domain controllers, etc.)

The first principle of performance analysis is that, in most cases, poor performance results from the exhaustion of one or more of these resources. As I investigate the usage of these resources in an IBM Rational ClearCase environment, I look first for obvious pathological symptoms and configurations -- that is, things that just don't belong. As an example, I recently was looking into a performance problem at a customer site. A quick check of the view host revealed that it was running 192 Oracle processes in addition to its Rational ClearCase duties. Whether that was the cause of the performance problem was not immediately obvious, but it clearly pointed to a need to assess whether the resources on the machine were adequate to support that many memory intensive processes.

In fact, that leads to another principle of performance analysis: Beware of jumping to conclusions. Often one problem will mask a less obvious issue that is the real cause of the problem. Also, be careful not to let someone lead you to a conclusion if he or she has a notion ahead of time about what is causing the problem. It's important to recognize that this notion is just a hunch and may not really be the explanation for the problem.

In performance analysis, I often think of a quote by physicist Richard Feynman: "The first principle is that you must not fool yourself, and you are the easiest person to fool." Essentially, I remind myself not to fall into the trap of believing that the first thing that looks wrong is really the primary problem.

A layered approach to investigation

Tackling an IBM Rational ClearCase performance problem can be a complex task. I find it a great help to partition the problem into three levels that comprise a "performance stack," as shown in Figure 1. At the lowest level are the operating system and hardware, such as memory, processors, and disks. Above that are IBM Rational ClearCase tunable parameters, such as cache size. At the highest level are applications. In Rational ClearCase, the application space includes scripts that perform Rational ClearCase operations, and Rational ClearCase triggers that execute automatically before or after a Rational ClearCase operation.


Figure 1: IBM Rational ClearCase performance stack

In my experience -- and barring any pathological situation -- as you move up each level in the performance stack, you can expect the performance payback from your efforts to increase by an order of magnitude. If you spend a week tweaking and honing parameters in the operating system kernel, you might see some performance gains. But if you spend some time adjusting the IBM Rational ClearCase caching parameters, then -- as a rough heuristic -- you'll see about a tenfold performance gain compared to the kernel tweaks. When you move further up and make improvements at the application layer, your performance gains will be about two orders of magnitude greater than those garnered from your lowest-level efforts. If you can optimize scripts and triggers, or eliminate them altogether, there are potentially huge paybacks. In Part II of this series, I'll talk more about how to optimize the application layer to improve performance.

With that in mind, you may be tempted to look first at the application layer. But as a matter of principle, when I do a performance analysis, I start at the bottom of the stack. I instrument and measure first at the OS and hardware level, and I look for pathological situations. Then I move up into the tunable database parameters, and I look at the application level last. There are a number of reasons for this order of investigation. First, it is really easy to look at the OS and hardware to see if there is something out of place going on. There are very basic tools you can use that are easy and very quick to run, and anything out of the ordinary tends to jump right out at you -- such as the 192 Oracle processes, for example. Similarly, at the next level up, IBM Rational ClearCase provides utilities that will show you its cache hit rates and let you tune the caches. These utilities are also very simple to use.

I look at the application layer last because of the complexities involved. This layer is more complex technically because it has multiple intertwined pieces. It also tends to be more complex politically because scripts and triggers usually have owners who created them for a reason and might not approach problem-solving the same way you do. Some become defensive if there's a hint they've done something wrong -- but often there is nothing "wrong"; it is just that what they have done is, by nature, slow.

Another reason for starting at the lowest level is simply due diligence. You do need to verify the fundamental operations of the system. Although it is where I start, I don't necessarily spend a lot of time there -- it's not where you get the most bang for your buck. I don't spend a lot of time with the IBM Rational ClearCase tunable parameters, either. It is usually a very quick exercise to examine the caches, adjust the parameters, and move on.

If you were to start at the top, you might tweak triggers and scripts for a month and never get to the fact that you are out of memory. If the system is out of memory, then that is issue number one. You should add more -- it is a fast and easy fix. Getting the lower two layers out of the way first gives you time to deal with the application layer. If you have enough time to optimize -- or even eliminate -- the application layer, then that's where you will have the greatest impact on improving performance.

Iterate, iterate, iterate

Performance tuning is an iterative process:

1. Instrument and measure.

2. Look at the data. Find where the current bottleneck appears to be.

3. Fix the problem.

4. Repeat.

You can keep following this cycle indefinitely, but eventually you'll come to a point of diminishing returns. Once you find yourself tweaking the kernel or looking up esoteric registry settings in the Microsoft knowledge base, you are probably at a good place to stop, because you are not likely to get a big return on your investment of time.

As you iterate, keep in mind the hierarchical nature of performance tuning. Remember that memory rules all. Symptoms of a memory shortage include a disk, processor, or network that appears to be overloaded. For example, when a system doesn't have enough memory, it will start paging data out to disk frequently. Once it starts doing that, the processor is burdened because it controls that paging, and the disk is working overtime to store and retrieve all those pages of memory. Adding more processing power or faster disks may help a little, but it will not address the root cause of the problem. Check for and fix memory shortages first, and then look at the other things.
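As a rough illustration of that first-pass memory check, the sketch below uses the third-party psutil library (an assumption on my part -- vmstat, top, or Task Manager report the same figures) to flag a host that has exhausted physical memory and is leaning on swap. The thresholds are arbitrary illustrations, not ClearCase guidance.

```python
import psutil  # third-party package; not part of the standard library

# Snapshot of physical memory and swap usage on this host.
mem = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"physical: {mem.percent:.0f}% used of {mem.total // 2**20} MB")
print(f"swap:     {swap.percent:.0f}% used of {swap.total // 2**20} MB")

# Arbitrary illustrative thresholds: exhausted RAM plus heavy swap use
# suggests fixing the memory shortage before tuning anything higher
# up the performance stack.
if mem.percent > 90 and swap.percent > 50:
    print("Likely memory shortage: add RAM before tuning caches or scripts.")
```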

Where to look


IBM Rational ClearCase is a distributed application. Its operations involve multiple host computers as well as several common network resources. For the purposes of solving a performance issue, I like to think of the Rational ClearCase world as a triangle whose vertices are the VOB host (the machine running the vob_server process), the view host (the machine running the view_server process), and the client (see Figure 2). When I undertake a performance analysis, I inspect each vertex of the triangle. I check the performance stack on each of those hosts, make sure that each has enough memory and other low-level resources, and look for abnormal situations.

Figure 2: The IBM Rational ClearCase environment

VOB host

In an IBM Rational ClearCase community, the permanent repository of software artifacts consists of one or more VOBs, which are located on one or more VOB hosts.

VOB servers are especially sensitive to memory, because of the performance benefits of caching the VOB database. With more memory, the VOB server can hold more of the database in memory. As a result, it will have to access data from the disk less often, thereby avoiding a process that is thousands of times slower than memory access. For the VOB host, the IBM Rational ClearCase Administrator's Guide recommends a minimum of 128 MB of memory, or half the size of all the VOB databases the host will support, whichever is greater. Heed the advice of the Administrator's Guide: "Adequate physical memory is the most important factor in VOB performance; increasing the size of a VOB host's main memory is the easiest (and most cost-effective) way to make VOB access faster and to increase the number of concurrent users without degrading performance."
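That sizing rule is easy to turn into a back-of-the-envelope calculation. The helper below simply encodes the guideline quoted above (a 128 MB floor, or half the combined VOB database size, whichever is greater); the function name and example figures are mine, not IBM's.

```python
def recommended_vob_host_memory_mb(vob_db_sizes_mb):
    """Administrator's Guide rule of thumb: at least 128 MB, or half
    the combined size of all VOB databases, whichever is greater."""
    return max(128.0, sum(vob_db_sizes_mb) / 2.0)

# Example: a host serving three VOB databases of 300, 450, and 120 MB
# should have at least 435 MB of physical memory by this rule.
print(recommended_vob_host_memory_mb([300, 450, 120]))  # 435.0
```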

Typically, there aren't many IBM Rational ClearCase tunable parameters on the VOB host. There are settings you can use to control the number of server processes, but this capability is rarely needed. There are other locking (lockmgr) parameters you can change if you notice errors in the Rational ClearCase log. In that case, consult the Rational ClearCase documentation or call IBM Rational technical support, and they will walk you through what you need to do.

View host

A view server manages activity in a particular Rational ClearCase view. The view server, in practice, should not run on the same physical machine as a VOB server. In some cases, the view server and client can run on the same box, depending on the configuration.

As with the VOB host, the first areas to check are the fundamentals -- memory, other processes running, and so on. But a view server has more Rational ClearCase parameters that can be adjusted. Views have caches associated with them, and you can increase the size of those caches to improve performance.

Client

I've been to some customer sites where the VOB host was doing great and the view host was doing great, but the client machines were woefully low on memory. The users complained about build problems because the compiler they were using was consuming all the available resources on the client. So if your check-out and check-in operations are just fine, but builds are slow, the client machines are one good place to look. The VOB host is another, because builds, especially clearmake builds, stress the VOB server for longer periods of time than check-out or check-in operations. As usual, check the OS and hardware level first. Also, if the user is working with dynamic views, the client machine will have MVFS (multiversion file system) caches that you can increase to improve performance.3

I'll talk in more detail about how to check resources and tune IBM Rational ClearCase in Part II of this series.

Shared network resources

Figure 2 shows a cloud of shared network resources that are also very important to IBM Rational ClearCase performance. These resources include domain controllers, NIS servers, name servers, registry servers, and license servers. Rational ClearCase must authenticate users before it allows operations. If the connection to the shared resources that are required for this authentication is slow, then user authentication in Rational ClearCase will be slow. The registry server and license server are fairly lightweight and are often run on the VOB host, so connectivity to these resources is usually not an issue.

When you're trying to save time, don't be latent

The edges of the triangle in Figure 2 are important as well. They represent the connectivity between the VOB host, view host, and client. In an IBM Rational ClearCase environment, not all network performance metrics are created equal. Network latency -- the time it takes data to arrive at its destination -- has a much greater impact on Rational ClearCase performance than network throughput, the amount of data that can be sent across the network within a given timeframe. That is because in most cases, Rational ClearCase is not moving enormous files around. What it is doing is making a lot of remote procedure calls, or RPCs.

As a quick review, an RPC is a particular type of message that functions like a subroutine call between two processes that can be running on different machines. When a client process calls a subroutine on a server, RPC data, including arguments to the subroutine, are sent over a lower-level protocol such as TCP or UDP. The server receives the RPC, executes appropriate code, and responds to the client. Then the client receives the response and continues processing. RPCs are synchronous; that is, the client does not continue processing until it receives the response. It is important to note that there is a call and a return -- every RPC is a two-way street. If it takes 10 ms (milliseconds) for an RPC to flow from the client to the server, then the total RPC "travel-time" is 20 ms, plus processing time.

In a typical IBM Rational ClearCase transaction, either the MVFS or a client will send an RPC to the view server. The view server, in turn, calls an RPC on the VOB server. The response must first come back to the view server, and then a second response is sent back to the client.

Figure 3: Remote procedure calls in a typical IBM Rational ClearCase transaction

This process has two layers of RPCs, each with a call and a response. If you have network latency of 10 ms between each of the machines, then this particular transaction will require 40 ms. Although that may not seem like much time, it quickly adds up. A check-out operation may involve more than 200 RPCs, as IBM Rational ClearCase authenticates the user, locates the VOB, locates the view, and so on. So in this case, even with relatively good 10 ms latency, over the course of the entire operation, Rational ClearCase can spend more than a second waiting for data to arrive through the network.
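
To make this arithmetic concrete, here is a back-of-the-envelope sketch in Python. The 10 ms latency, the two RPC layers, and the 200-RPC check-out are the illustrative figures from the discussion above, not measurements from any particular installation:

    # Every synchronous RPC costs a call plus a response: two one-way trips.
    def rpc_round_trip_ms(one_way_ms, layers=1):
        return 2 * one_way_ms * layers

    # Client -> view server -> VOB server: two RPC layers at 10 ms per link.
    print(rpc_round_trip_ms(10, layers=2))      # 40 ms for one layered transaction

    # A check-out issuing ~200 single-layer RPCs at 10 ms one-way latency:
    print(200 * rpc_round_trip_ms(10) / 1000)   # 4.0 -- seconds spent on the wire

Even this crude model makes the point: it is the per-RPC round trips, multiplied by the sheer number of RPCs, that dominate the wait.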

Latency increases with every "hop" -- or router -- that data must traverse en route from its source to its destination. Each router must process a packet to determine its destination, and that processing takes time. So, the fewer hops, the better. Remember, with Rational ClearCase performance tuning, it is latency, rather than bandwidth, that really matters. You might have a network with gigabit throughput capabilities, but if an RPC has to travel through a dozen routers, then you will pay a significant performance penalty.
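
Part II will cover measurement in detail, but as a rough preview, you can get a feel for the latency between two of these hosts by averaging a few ICMP round trips. This Python sketch shells out to the standard Unix ping utility; the host name is a placeholder, and the parsing assumes the common Linux-style summary line, so treat it as a quick probe rather than a supported tool:

    import re
    import subprocess

    def avg_rtt_ms(host, count=5):
        """Average round-trip time, parsed from a Unix-style ping summary."""
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True, check=True).stdout
        # Typical summary: "rtt min/avg/max/mdev = 0.311/0.424/0.661/0.120 ms"
        m = re.search(r"= [\d.]+/([\d.]+)/", out)
        if m is None:
            raise ValueError("unrecognized ping output")
        return float(m.group(1))

    # vobhost.example.com is a placeholder for your VOB server.
    print(avg_rtt_ms("vobhost.example.com"))  # round trip; halve for one-way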

Part II of this article series will provide details on how to assess network latency and other network issues.

A case study

To illustrate some of the principles of IBM Rational ClearCase performance analysis and tuning we have just discussed, let's look at a real-life case study. I was working with a customer that had been using Rational ClearCase for about a year. They had implemented their own process, which included additional tracking and authorization -- they were not using UCM (Unified Change Management4). The VOBs were all located on a single Solaris server with four processors and 4 GB of memory. The view server -- which they also used to perform builds -- was on a separate but essentially identical machine. Even with these fairly high-powered machines, the customer was complaining of poor performance during check-out and check-in operations.

Level 1: OS / Hardware

When we talked to the system administrators, they thought that the VOB and view servers were running just fine. They believed that IBM Rational ClearCase was the problem. So we started with the performance stack, moving from the bottom to the top. We did our initial analysis at the bottom layer, looking for pathological things -- such as odd configurations or strange processes running on the machines -- as well as the standard sweep of resource metrics -- memory, processor, disk, and so on. We determined that the VOB host was fine but the view host was not.

As it turned out, this was the customer that had 192 Oracle processes running on the view host! These processes were consuming 12 GB of virtual memory on a system with only 4 GB of physical memory. Of course, some of the memory used by each process was shared, reducing the total memory used by these processes to something less than 12 GB -- but that was still far more than the system had. Our observations quickly revealed that the system was out of memory and that processor utilization was very high -- the processor had zero idle time. But the core issue wasn't processing power; it was memory.
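
Pathologies like this are easy to surface with ordinary operating-system tools. As one illustration -- an assumption on my part, not the exact procedure we used -- the following Python sketch totals the virtual memory that the Unix ps command reports for each command name; on that view host it would have put the Oracle processes at the top of the list immediately:

    import subprocess
    from collections import defaultdict

    def vsz_mb_by_command():
        """Total virtual memory (VSZ, in MB) per command name, via Unix ps."""
        out = subprocess.run(["ps", "-e", "-o", "vsz=", "-o", "comm="],
                             capture_output=True, text=True, check=True).stdout
        totals = defaultdict(int)
        for line in out.splitlines():
            if not line.strip():
                continue
            vsz_kb, _, comm = line.strip().partition(" ")
            totals[comm.strip()] += int(vsz_kb)
        return {comm: kb // 1024 for comm, kb in totals.items()}

    # Show the ten biggest consumers, largest first.
    for comm, mb in sorted(vsz_mb_by_command().items(), key=lambda kv: -kv[1])[:10]:
        print(f"{mb:6d} MB  {comm}")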

We recommended that the customer remove the Oracle processes from the view server machine. After that, we suggested adding memory if it was still needed, and changing their user interaction model so that they were not compiling on the view host. Because the customer had not noticed the performance problems before installing Rational ClearCase (along with some application layer scripts they had developed), they hesitated to make these changes; they still suspected that Rational ClearCase, not their systems, was causing the problem.

Level 2: Rational ClearCase tunable parameters


Our next step was to move up the performance stack, looking at ways to tune Rational ClearCase to improve performance. We determined that the MVFS and view caches were undersized. Our second recommendation was to increase the size of these caches, but we warned the customer of the inherent danger in this step. Allocating larger caches would make the memory shortfall greater, because we were essentially setting aside memory that the system already lacked. We went ahead, knowing that we were not addressing the memory issue. Performance did improve, but not substantially.

Level 3: The application space

Our next step was to examine the application layer. The customer had implemented process scripts that they wrapped around check-out and check-in operations to perform some additional authentication and logging. We instrumented those scripts to find out where the time was being spent, and then we ran them periodically throughout the day. The measurements revealed that the actual Rational ClearCase check-out and check-in times averaged 0.5 seconds, even on a view host that was completely out of memory. The rest of the scripts' processing time clocked in at 17.4 seconds. The logging and other functions performed in the application layer were taking roughly thirty-five times longer than the Rational ClearCase functions. And this was a fairly consistent ratio. At different times of the day, the Rational ClearCase times would be up to 0.7 seconds, but the script times were then close to 25 seconds. And that's why people were complaining.
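
Instrumenting a wrapper script for this kind of breakdown takes only a few lines. The sketch below is a simplified, hypothetical stand-in for the customer's scripts -- the file name and the placement of the logging steps are invented -- but it shows the essential idea: time the cleartool command separately from the surrounding script logic:

    import subprocess
    import time

    def timed(label, argv):
        """Run a command and report its wall-clock time."""
        start = time.perf_counter()
        subprocess.run(argv, check=True)
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed:.2f} s")
        return elapsed

    script_start = time.perf_counter()
    # ... the wrapper's own authentication and logging would run here ...
    cc_time = timed("cleartool checkout", ["cleartool", "co", "-nc", "foo.c"])
    # ... more wrapper logging would run here ...
    total = time.perf_counter() - script_start
    print(f"wrapper overhead: {total - cc_time:.2f} s")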

To summarize, we started at the bottom of the performance stack. At the hardware level, you don't often get a lot of payback, but looking for pathological indicators is something you need to do. We quickly saw the Oracle processes, noticed that the machine was also being used to compile, and determined that the view host was very low on memory. Next, we looked at the IBM Rational ClearCase tunable parameters, and then produced a noticeable -- but not huge -- improvement by adjusting them. The real impact was in the application layer. By rapidly examining the first two layers, we had enough time to fully analyze the application space, and we found that there was a lot of room for improvement.

The customer examined the functionality they had achieved with the application layer scripts, and they found that some of the functionality was already being provided by IBM Rational ClearCase. In addition, some of the more complex tracking features they had implemented were embodied in Unified Change Management, so they decided to implement UCM. This made a critical difference in the amount of application-level processing required, so check-in and check-out times dropped significantly -- and people stopped complaining.

What? Where? How?

So far I've talked about what to look for when analyzing and tuning IBM Rational ClearCase performance, and I've talked about where to look. In Part II, I'll discuss how to improve Rational ClearCase performance using tools and utilities you probably already have. Stay tuned!


Notes

1 The performance of IBM Rational ClearCase, like that of any application, is dependent upon its environment, including the operating system, the hardware it runs on, and other applications running in the same environment. In addition, each organization will have its own tolerances and expectations of performance. Because of this wide range of potential environments and expectations, it is impossible to give hard-and-fast guidelines on what constitutes an acceptable level of performance. If you need assistance in determining whether your Rational ClearCase performance is reasonable for your specific environment and configuration, you may want to contact IBM Rational technical support. It is also beyond the scope of this article to provide detailed instructions on how to tweak the operating system kernel, NFS (Network File System), Samba, or other low-level technologies.

2 For an excellent and detailed discussion on this topic, see Configuration and Capacity Planning for Solaris Servers by Brian L. Wong (Sun Microsystems Press, 1997).

3 MVFS is a feature of IBM Rational ClearCase that supports dynamic views. Dynamic views use the MVFS to present a selected combination of local and remote files as if they were stored in the native file system. MVFS also performs auditing of clearmake targets and maintains several caches to maximize performance.

4 Unified Change Management is IBM Rational's "best practices" process for managing change from requirements to release. Enabled by IBM Rational ClearCase and IBM Rational ClearQuest, UCM defines a consistent, activity-based process for managing change that teams can apply to their development projects right away.


Modeling for enterprise initiatives with the IBM Rational Unified Process

Part I: RUP and the System of Interconnected Systems Pattern

by Peter Eeles
Rational Software, UK
IBM Software Group

and

Maria Ericsson
Rational Software, Sweden
IBM Software Group

Photo © 2003 Andrew Lampitt

Developing a system is rarely about developing a single software application. Addressing business problems typically requires a broader perspective that views the system as consisting of other systems. Developing such a system may involve many interdependent projects. This article looks at how the Rational Unified Process®, or RUP®, can be applied in this broader context. Out of the box, RUP focuses on the execution of a single project to develop a single software application, but it is highly suitable for developing more complex systems as well. In particular, we consider RUP as a process framework for typical enterprise initiatives such as:

● Enterprise architecting: defining an architecture1 that underpins a number of systems.


● Enterprise Application Integration (EAI): developing a solution that includes the integration of a number of legacy systems.

● Packaged application integration: developing a solution that includes the configuration of a packaged application, such as an Enterprise Resource Planning (ERP) or Customer Relationship Management (CRM) solution.

● Strategic reuse: developing reusable assets that are used within a number of systems.

● Systems engineering: developing a system that contains elements of hardware, software, workers, and data.

● Outsourced development: defining an architecture that lends itself to the outsourced development of its constituent parts, while ensuring the quality and integrity of these parts.

Organizations often combine a number of these initiatives because of their business situation or technological factors, and each of these initiatives may represent a "system of systems." In other words, the inherent complexity of the overall system requires development organizations to decompose it into a number of "subsystems" that they implement within a number of projects. Maintaining a consistent relationship between the overall system and its associated subsystems requires careful consideration of a number of areas including:

● Architecting

● Project management

● Requirements management

● Change management

● Testing

This article focuses on what goes into architecting a system of systems using RUP and the System of Interconnected Systems Pattern (discussed below). We wrote it in response to a growing need we see among our customers to understand how all their development efforts tie together. Although we do not provide all the answers, this article is a first step toward understanding how to scale up the process framework provided in RUP for complex enterprise systems.

Note that we do not provide an introduction to RUP in this article; see the References section for suggested reading.

Key Terms

A key term we use in this article is system, and so before proceeding we should be more precise about its meaning, which we base on the following definitions:

A system is a top-level subsystem in a model. A subsystem is a grouping of model elements that represents a behavioral unit in a physical system. A subsystem offers interfaces and has operations. In addition, the model elements of a subsystem can be partitioned into specification and realization elements. [OMG Unified Modeling Language (UML), Version 1.4]

A system is a collection of connected units that are organized to accomplish a specific purpose. A system can be described by one or more models, possibly from different viewpoints. [RUP Version 2002.05]

A system provides a set of services that are used by an enterprise to carry out a business purpose. System components typically consist of hardware, software, data, and workers.2 [RUP-SE]

Although these definitions provide different levels of detail, they are remarkably consistent. In particular, we derived from them the following assumptions about the nature of a system:

● A system exhibits behavior.

● A system can contain other elements.

● A system fulfills a specific purpose.

● A system can be composed of hardware, software, data, and workers.

Another key term in this article is pattern, which we use in a general sense to mean a "common solution to a common problem in a known context." Christopher Alexander offers a more detailed definition that fits well with the System of Interconnected Systems Pattern we discuss specifically in this article:

Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.3

The System of Interconnected Systems Pattern

The System of Interconnected Systems Pattern is an architectural pattern that is used to help control the complexity inherent in a system of systems.4 The pattern identifies a system that represents overall capability and refers to it as the superordinate system. The other systems that represent a part of this overall capability are each referred to as a subordinate system. This division is shown in Figure 1.


Figure 1: The System of Interconnected Systems Pattern

The System of Interconnected Systems Pattern is also recursive: A subordinate system may have subsystems of its own and be superordinate in relation to those subsystems. This characteristic is particularly important to initiatives such as enterprise architecting and systems engineering, as we will discuss later.
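
For readers who think best in code, the recursion is easy to picture as a tree. The following Python sketch is purely illustrative -- the system names are invented and nothing here comes from RUP itself:

    class System:
        """A node in a system of systems: superordinate to its subordinates."""
        def __init__(self, name, subordinates=()):
            self.name = name
            self.subordinates = list(subordinates)

    # "Billing" is subordinate to "Order Management" yet superordinate to
    # its own subsystems -- the pattern applies recursively at every level.
    enterprise = System("Order Management", [
        System("Billing", [System("Tax Engine"), System("Invoicing")]),
        System("Fulfillment"),
    ])

    def depth(system):
        """Levels of decomposition beneath a system (0 for a leaf)."""
        return 1 + max(map(depth, system.subordinates)) if system.subordinates else 0

    print(depth(enterprise))  # 2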

The System of Interconnected Systems Pattern and RUP

Before discussing the use of the System of Interconnected Systems Pattern within the context of RUP, we should first be clear about the distinction between systems and projects. The System of Interconnected Systems Pattern is primarily concerned with the architectural decomposition of a large and complex system into a number of subsystems, whereas RUP is primarily concerned with the execution of a project. These two facets -- architecture and projects -- should not be confused.

Although it is often the case that a particular system (superordinate or subordinate) is best implemented as a single project, this article acknowledges that this is a simplification. A single project may implement a number of systems, and a single system may be implemented as a number of projects. On a related note, it is sometimes beneficial to think of a superordinate system as a "living entity" that, to a large extent, is never "finished." As a result, the superordinate system typically undergoes a series of sequential development cycles (i.e., sequential "passes" through the RUP phases of Inception, Elaboration, Construction, and Transition), each of which is executed within a separate project.

How can RUP support development for both levels described in the System of Interconnected Systems Pattern -- the superordinate and the subordinate? We have found that the best way to explain this is in terms of RUP artifacts, which we examine below. Later, we will touch upon related concerns, including a discussion in the Appendix on architectural views.5

Figure 2 shows the relationship between a superordinate system and a number of subordinate systems in terms of key RUP artifacts. It also provides a framework for thinking about specific artifacts of a superordinate project, specific artifacts of a subordinate project, and the relationships among them (we will consider additional relationships in Part II of this series). The artifacts are aligned with the RUP disciplines in which they are produced. A description of each of these artifacts can be found in RUP itself as well as in Philippe Kruchten's book, The Rational Unified Process: An Introduction (see References). Although RUP is typically applied to software development projects, its concepts and best practices (such as requirements management) also apply to nonsoftware projects.

Figure 2: Relationships among RUP artifacts for a superordinate system and subordinate system


Figure 2 shows that, in general, a particular subordinate artifact is constrained along two dimensions:

● By its relationship with artifacts associated with the superordinate system. For example, a subordinate design model is constrained by the superordinate design model.

● By its relationship with artifacts associated with the same subordinate system. For example, a subordinate design model is constrained by the subordinate analysis model that it refines.

We will describe each of the relationships shown in Figure 2 in Part II of this series.

It is also worth noting that the superordinate system is concerned primarily with a "broad brush" perspective, concentrating only on elements that are architecturally significant. However, it is the obligation of each subordinate system to provide required details -- each subordinate system effectively "populates" aspects of the superordinate system. Therefore, we can say that the development of a system of systems is both top down (the superordinate system provides a context for each subordinate system) and also bottom up (each subordinate system populates aspects of the superordinate system). RUP for Systems Engineering6 provides a prescription for how to proceed -- in particular, with top-down modeling of use cases (use-case flowdown). The discussion in this article focuses on dependencies from subordinate artifacts to superordinate artifacts. However, in more general terms, the superordinate system is also dependent on each of the subordinate systems that implement it.


So what comes first -- the superordinate system or the subordinate system? It is actually a classic "chicken and egg" problem -- some might call it a "design paradox." The whole cannot be defined without understanding the technicalities of the parts, and the parts cannot be defined in detail without understanding the whole. This tells us that they are interdependent, and their development should go hand in hand. There is often a tendency to solve complex problems top down, just because it is easier to break a large problem down into a number of smaller and more manageable problems than to attack the whole beast in one go. However, the risk is that we will forget to consider how the solutions for each of the smaller problems impact the overall solution and get bogged down in building parts that in the end don't fit together. Another common problem is that when we have many "parts" (e.g., when integrating legacy systems) that we'd like to fit together, we must limit the whole, based on what the given parts are able to provide.

Applying the System of Interconnected Systems Pattern to enterprise initiatives

Now that we understand how RUP relates to the System of Interconnected Systems Pattern, we can examine how to apply the Pattern to the initiatives we discussed earlier: enterprise architecting, EAI, packaged application development, strategic reuse, systems engineering, and outsourced development. Note that in each instance, the project teams should decide which of the RUP artifacts we described in the previous sections would be beneficial to the project (we will discuss this later), and then create those artifacts according to the relationships depicted in Figure 2.

Enterprise architecting

Enterprise architecting is concerned with providing a "platform" for developing all systems that comprise an enterprise, and typically has concerns within a number of areas, such as data, functionality, geographic distribution, and people.

We can decompose an enterprise into its respective elements by expressing the enterprise itself as a superordinate system, as we have described it, and the elements of the enterprise (in this context) as subordinate systems. It is especially important to note that the System of Interconnected Systems Pattern is recursive, since enterprise architecting initiatives are often described at a number of levels.

Enterprise application integration

EAI is concerned with developing a solution that includes the integration of a number of legacy systems. Such efforts are, to a large extent, driven from the "bottom up," since elements of the solution already exist. However, there is still some "top down" effort required to understand the context within which such legacy systems will "fit." Also, if the legacy system is software, techniques such as "wrapping" interfaces to the legacy applications are often required, together with a good understanding of the available middleware technologies that can be used to interact with such applications.

We can describe the context within which a legacy system fits as a superordinate system, and represent the legacy system itself as a subordinate system.

Packaged application development

Packaged applications are, in effect, customizable frameworks that allow you to build a "family" of applications that support a certain aspect of a business, such as CRM or HR (Human Resources). These frameworks can be considered at two levels.

First, such frameworks often implement a piece of a larger system. In this context, the packaged application (or a piece of the packaged application) represents a subordinate system. Second, such frameworks are often large and complex. Thinking of them as a system of systems in their own right can help us understand them, especially when it comes to understanding how the packaged application will be applied. What pieces will be used as-is? What pieces will be used after modification or configuration? What pieces will not be used at all?

Strategic reuse

There are many dimensions to a strategic reuse initiative, including business concerns (such as return on investment decisions) as well as technical concerns. In this article, we are concerned primarily with the technical aspects of "architecting for reuse." Also, although reusable assets can take many forms (including documents and models), this article focuses primarily on elements of the final system, such as software and reusable hardware.

Reusable assets (whether a simple component or an entire system) do not, by definition, exist in isolation, because they are reused within a number of contexts. A fundamental premise of any strategic reuse initiative, therefore, is to define the services that each asset provides and any services it requires. These assets and their relationships can be described in terms of a system of systems. For example, in his book Software Reuse, Ivar Jacobson considers the concepts of an application system and a component system.7 In simple terms, an application system represents an application that provides value to an end user, whereas a component system represents a set of components that are used by a number of application systems. The system of systems that shows the relationships between the application systems and component systems is described in terms of an overall layered architecture -- the product of application family engineering, which other texts refer to as product line engineering.

Systems engineering

As we said earlier, systems engineering is concerned with hardware, software, workers (people), and data. Two aspects of the process of systems engineering are identifying the elements that comprise the system and understanding the relationships among them. In identifying the elements, we particularly take into account nonfunctional requirements such as performance and cost. For example, we may choose to implement a system element in hardware rather than software for performance reasons, or in terms of workers (people) rather than software for usability reasons (an end user may have a better "user experience" of the system when interacting with a human being!).

The system as a whole can be expressed in terms of a superordinate system, which identifies the elements that comprise the system and the relationships among them. Each of these elements can then be expressed in terms of a subordinate system. Again, it is important to bear in mind that the System of Interconnected Systems Pattern is recursive, since most systems engineering efforts require more than two levels of decomposition. Note that the various system boundaries (as well as subsystem and potential sub-subsystem boundaries) in systems engineering actually enclose people, hardware (computational and noncomputational), and software.

See RUP for Systems Engineering8 for more detail.

Outsourced development

When an organization outsources development in order to supplement development capacity or reduce costs, it is vital to maintain architectural integrity between the overall system and its constituent parts. Constraints imposed on the development of the "parts" may vary. For example, you might describe an outsourced part in terms of the requirements it must fulfill, assuming complete flexibility in terms of the solution. Or you might describe an outsourced part in terms of a detailed set of services that it must provide, or even how it will be implemented.

Whatever the scope of the outsourcing, it is advantageous to describe the overall architecture in terms of a superordinate system and the constituent parts in terms of subordinate systems.

When to apply the System of Interconnected Systems Pattern

This section discusses the circumstances -- either business-driven or technology-driven -- under which to apply the System of Interconnected Systems Pattern.

Merging organizations

Often, organizations merge to (among other things) save costs and then struggle when they discover the complexity of this task. There are many difficult decisions to make: what future platform to choose, which systems to keep and which to replace, and how to tackle the business impact of changing systems, to name a few. A merger is a system of systems problem involving people, hardware, and software, and may entail the following kinds of initiatives:

● Enterprise architecting: to get an overall understanding of the problem.

● Enterprise application integration: to integrate existing and new systems.

● Strategic reuse and outsourced development: to develop with efficiency.

Modernizing legacy systems

Organizations often find it necessary to move to new technologies to modernize their systems. Over time, it becomes more and more costly (and time consuming) to add capability to legacy systems, so re-architecting and replacing these systems makes business sense. A typical approach is to gradually replace one or more legacy systems and integrate new ones on a schedule that ensures appropriate management of business or technical risks. Understanding the overall business impact of evolving legacy systems is a system of systems problem involving people, software, and hardware. This type of effort may involve the following initiatives, which can benefit from applying the System of Interconnected Systems Pattern:

● Enterprise architecting: to understand the impact of introducing modern technology.

● Enterprise application integration: to integrate legacy systems with new systems and also legacy systems with other legacy systems.

● Packaged application development: to integrate new technology such as an ERP solution.

Building technically complex products

Building technically complex products, such as telecom network products or air-traffic control systems, has always been considered a system of systems undertaking that involves both hardware and software. As technologies are now changing more rapidly than ever before, there is a stronger need for a thorough treatment of such efforts. It is no longer sufficient to break these systems down into smaller pieces that you resolve independently while handling dependencies along the way. Instead, organizations need a well-defined approach that helps them to proactively handle dependencies and be flexible. Architectural decisions are challenging, and they should not depend merely on people's experience. More systematic and efficient methods are required to assess the potential impact (on cost, resources, architecture, and technology) of these decisions.

This type of effort may involve the following initiatives, which benefit from applying the System of Interconnected Systems Pattern:

● Systems engineering: to integrate hardware, software, and process, and define the project overall.

● Outsourced development and strategic reuse: for more efficient development.

Hardware development going soft

Many organizations that have traditionally considered themselves builders of hardware are becoming increasingly dependent on building quality software. Consumer products such as mobile phones, TVs, and cars contain more software than ever before, and designing the hardware and software to work optimally together is a system of systems problem. This type of effort may involve the following initiatives, which are appropriate for the System of Interconnected Systems Pattern:

● Systems engineering: to integrate hardware, software, and process, and define the project overall.

● Strategic reuse: for more efficient development.

Deciding which RUP artifacts to create

The first step in deciding what RUP artifacts to create when you use the System of Interconnected Systems Pattern is to consider whether or not the Pattern should be applied at all. In many instances you may be able to consider a system in its entirety without considering any separate, subordinate systems. The Pattern is most effective when the benefit of managing the system's complexity outweighs the overhead of defining subordinate systems (and producing additional artifacts). For example, if we were to build an air-traffic control system, we might face several complex demands, including the need to 1) develop hardware as well as software; 2) clearly separate the responsibilities of (and boundaries between) system elements so that they can be outsourced; and 3) establish effective communications among geographically distributed teams. The different models of the pattern provide communication mechanisms (interfaces and abstraction) needed to manage team dependencies. In this case, the benefits of treating the system as a system of systems would outweigh the cost of defining a number of separate subordinate systems.

Once you decide to apply the Pattern, you must then identify the artifacts that should be produced for both the superordinate system and the subordinate systems (since you don't necessarily need to produce all the artifacts). We will discuss all of the key RUP artifacts (and the relationships among them), but keep in mind that there is a cost involved in creating and maintaining each of them. Therefore, project managers need to think pragmatically about the value a particular artifact adds, weighing the cost of creating and maintaining it against the risk of not doing so. Below we list some examples of choices that organizations often make with respect to certain artifacts.

● Business Use-Case Model and Business Object Model. Project teams often do business modeling for a superordinate system, but not always for a subordinate system -- particularly when the business model for the superordinate system provides sufficient input for developing the subordinate systems. However, business modeling for a subordinate system may be useful if the superordinate system is very large and its complexity needs to be managed. In particular, project teams may want to do business modeling for a subordinate system if they need a more detailed understanding of a particular aspect of the organization.

● Use-Case Model. It is common for a project team to produce requirements artifacts for both a superordinate system and a subordinate system. The trick is to find the right level of detail at the superordinate level -- enough for those defining the subordinate level requirements, without doing their work for them. In other words, it is desirable to identify and briefly outline superordinate use cases (with an emphasis on defining architecturally significant elements), but not to detail specifications of the flow of events. Such detail will be provided at the subordinate level.

● Analysis Model and Design Model. The analysis and design artifacts are critical for defining architecture. However, just as with requirements artifacts, the emphasis is different for the superordinate system and the subordinate systems. Development of the superordinate system focuses on defining an architecture within which the subordinate systems will be constrained, and also on identifying the subordinate systems. However, this effort may extend only as far as identifying the architecturally significant responsibilities (functional characteristics) and quality attributes (nonfunctional characteristics) of each subordinate system. Then, developing the subordinate system involves adding the detail necessary to ensure that the subordinate system meets these responsibilities and quality attributes, in terms of elements that implement the subordinate system. In some cases, teams create an analysis model only for the superordinate system to assist in architectural decision making at that level, but they do not really need that kind of decision making or model at the subordinate system level.

● Implementation Model. Developing a superordinate system may not require any implementation at all (aside from implementations of the subordinate systems that comprise it). However, organizations may want to undertake some implementation to prove aspects of the superordinate system's architecture, or to ensure that some common components (for example) are available to each subordinate system. The development of subordinate systems typically includes an element of implementation for system delivery (unless the subordinate system is further decomposed).

● Test artifacts. Organizations may undertake some testing to validate aspects of the superordinate system, and they must always test the integration of each subordinate system. Testing during development of a subordinate system is primarily to validate that system's implementation.

Deciding what information should be specified at what level (superordinate or subordinate) can be overwhelming. There is a tendency to treat the superordinate level very lightly and focus the majority of the work at the subordinate level. When this happens, the risk is that the overall architecture of the superordinate system will never become really coherent. Because many decisions are made at the subordinate level, the overall results may not be quite as consistent as they should be. Conversely, there may be a risk that the work at the superordinate level will go into too much detail, and that the detail will need to be redone at the subordinate level.

The challenge is to find the right balance between providing sufficient detail to ensure consistency among subordinate systems, and allowing enough flexibility at the subordinate level (i.e., not imposing artificial constraints). There are no simple rules for how to do this; decisions need to be based on the characteristics of the system of systems being developed. In the Appendix to this article, we discuss additional approaches to depicting a complex system's architecture.

A word about iterative development

There is one particular characteristic of RUP that we do not want to overlook: iterative development. Although this article does not emphasize this particular characteristic, we do acknowledge that taking an iterative approach to applying the pattern is critical to ensuring success. This is true for all of the system elements we have discussed -- software, hardware, data, and workers.

The decision space associated with the initiatives we have discussed in this article is huge. The likelihood of defining a successful architecture up front is extremely small, so the only effective means to converge on a suitable architecture is to design a little, implement a little, test a little, incorporate lessons learned, design a little, implement a little, test a little, and so on. The References below include a number of sources that discuss iterative development in detail, including Philippe Kruchten's book and RUP itself. Particular consideration of iterative development with respect to a system of systems can be found in RUP-SE as well as Ivar Jacobson's book.

Summary

In summary, we can apply the System of Interconnected Systems Pattern, the iterative approach embodied in RUP, and appropriate RUP artifacts, to a diverse set of enterprise initiatives, ranging from enterprise architecting to systems engineering. The System of Interconnected Systems Pattern provides a means of managing complexity within such initiatives and complements the best practices underpinning RUP.

In Part II of this series, we will describe in detail how to apply various RUP disciplines to the development of both a superordinate system and a subordinate system, including a number of artifacts we discussed in Part I.

Acknowledgments

The authors would like to thank the following people for their help and guidance in writing this paper: Roger Bowser, Dave Brown, Murray Cantor, Kelli Houston, Ivar Jacobson, Paula Simmonds, John Smith, and Dave West, all of IBM Rational; Christina Cooper-Bland of BACS; Jon Pidgeon of Lloyds TSB; Alan Whitfield of UK Inland Revenue; and Gary Willcocks of EDS.

References

The following references were used in preparing this paper.

Christopher Alexander et al. A Pattern Language. Oxford University Press, 1977.

Paul Allen. Realizing e-Business with Components. Addison-Wesley, 2000.

Scott Ambler. The Unified Process Elaboration Phase. R&D Books, 2000.

Colin Atkinson et al. Component-based Product Line Engineering with UML. Addison-Wesley, 2001.

Len Bass, Paul Clements, and Rick Kazman. Software Architecture in Practice. Addison-Wesley, 2003.

Jan Bosch. Design and Use of Software Architectures: Adopting and Evolving a Product-Line Approach. Addison-Wesley, 2000.

Murray Cantor. "Rational Unified Process for Systems Engineering." IBM Rational whitepaper, available at http://www.rational.com/products/whitepapers/wprupsedeployment.jsp.

John Cheesman and John Daniels. UML Components--A Simple Process for Specifying Component-Based Software. Addison-Wesley, 2000.

Paul Clements et al. Documenting Software Architectures--Views and Beyond. Addison-Wesley, 2002.

Maria Ericsson. "Developing Large-scale Systems with the Rational Unified Process." IBM Rational whitepaper, available at http://www.rational.com/products/whitepapers/sis.jsp.

Hans-Erik Eriksson and Magnus Penker. Business Modeling with UML--Business Patterns at Work. John Wiley & Sons, 2000.

Michel Ezran, Maurizio Morisio, and Colin Tully. Practical Software Reuse. Springer, 2002.

Peter Herzum and Oliver Sims. Business Component Factory. John Wiley & Sons, 1999.

Christine Hofmeister, Robert Nord, and Dilip Soni. Applied Software Architecture. Addison-Wesley, 1999.

IEEE-Std-1471-2000. Recommended Practice for Architectural Description of Software-Intensive Systems. Available at http://standards.ieee.org/catalog/olis/se.html.

Ivar Jacobson, Martin Griss, and Patrik Jonsson. Software Reuse. Addison-Wesley, 1997.

Ivar Jacobson, Maria Ericsson, and Agneta Jacobson. The Object Advantage--Business Process Reengineering with Object Technology. Addison-Wesley, 1994.

Philippe Kruchten. The Rational Unified Process--An Introduction. Addison-Wesley, 2000.

Janis Putman. Architecting with RM-ODP. Prentice Hall, 2000.

Rational Unified Process, version 2002.05. IBM Rational Software.

RUP for Systems Engineering. IBM Rational Software. Available from IBM Rational Developer Network (http://www.rational.net; authorization required).

Clemens Szyperski. Component Software--Beyond Object-Oriented Programming. Addison-Wesley, 2002.

Dave West, Kurt Bittner, and Eddie Glen. "Ingredients for Building Effective Enterprise Architectures." The Rational Edge, November 2002. Available at http://www.therationaledge.com/content/nov_02/f_enterpriseArchitecture_dw.jsp.

John Zachman. "A Framework for Information Systems Architecture." IBM Systems Journal, Vol. 26, No. 3, 1987.

Appendix: Architectural representation

Although this article focuses on models for describing different aspects of a system, it is common practice to also define an architectural representation of a system that omits elements not deemed to be architecturally significant. This representation is often expressed in the form of "architectural views," with each view providing a particular perspective of a subset of one or more models.

Architects can choose among a number of standard architectural representations, depending on the nature of the system they are describing. Examples include the following:

● The 4 + 1 Views of Software Architecture, defined by Philippe Kruchten, is the architectural representation advocated in RUP.9

● The C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance) Architecture Framework, defined by the U.S. Department of Defense (DoD), is the standard used in military domains.10

● The IEEE Recommended Practice for Architectural Description of Software-Intensive Systems (ANSI/IEEE-1471-2000) provides a conceptual framework for architectural description and defines what is meant by a 1471-compliant architectural description.

● The Reference Model for Open Distributed Processing (RM-ODP) is an ISO standard.11

● The Zachman Framework, defined by John Zachman, is most often associated with enterprise architecting.12

● The RUP-SE model framework, introduced in the Systems Engineering Plug-In to the IBM Rational Unified Process, is available from IBM Rational Developer Network.13

Notes

1 In this article, the term "architecture" has a very broad meaning that encompasses software architecture, hardware architecture, organizational structures, and so on.

2 Derived from Systems Engineering and Analysis (Third Edition), Blanchard and Fabrycky, Prentice Hall, 1998.

3 Christopher Alexander et al. A Pattern Language. Oxford University Press, 1977.

4 The System of Interconnected Systems Pattern is documented in more detail in a number of publications, including the book by Ivar Jacobson et al., Software Reuse: Architecture, Process and Organization for Business Success (Addison-Wesley, 1997) and an IBM Rational whitepaper by Maria Ericsson (http://www.rational.com/products/whitepapers/sis.jsp), "Developing Large-scale Systems with the Rational Unified Process." The latter presents principles consistent with the approach described in Murray Cantor's IBM Rational whitepaper, "Rational Unified Process for Systems Engineering" (http://www.rational.com/products/whitepapers/wprupsedeployment.jsp), and applied in the RUP-SE Extension to RUP, available through Rational Developer Network (www.rational.net; authorization required). Jacobson et al. use the Pattern primarily for defining strategic reuse initiatives. Ericsson describes, at a high level, how to use the Pattern within the context of RUP. This article describes this alignment of the Pattern with RUP in more detail.

5 For more detailed process discussion regarding specific initiatives, see Cantor, Op. Cit. on systems engineering and Jacobson et al., Op. Cit. on strategic reuse.

6 Available through Rational Developer Network: www.rational.net; authorization required.

7 Ivar Jacobson, Op. Cit.


8 RUP-SE Extension to RUP, Op. Cit.

9 For more information, see Kruchten, Op. Cit. and RUP.

10 For background and a rationale, see Dave West, Kurt Bittner, and Eddie Glen, "Ingredients for Building Effective Enterprise Architectures." The Rational Edge, November 2002. Available at http://www.therationaledge.com/content/nov_02/f_enterpriseArchitecture_dw.jsp.

11 See Janis Putman, Architecting with RM-ODP. Prentice Hall, 2000.

12 For more information, see John Zachman, "A Framework for Information Systems Architecture." IBM Systems Journal, Vol. 26, No. 3, 1987.

13 Rational Developer Network is at www.rational.net (authorization required). Also see Murray Cantor, Op. Cit.


The subsystem: A curious creature

by Bran Selic
Rational Software, Canada

IBM Software Group

An Ode to the Polysubsystemibus

In Yuml -- a land of mysterious creatures,

lives the fuzzy polysubsystemibus, an amalgam of features.

Not quite an object, but an abstraction;

those who'd explain it are driven to distraction.

An animal, a mineral, a circle, and a square;

perhaps its deep secret is: nothing is there.

In his article "The What, Why and How of a Subsystem," which appeared in last month's issue of The Rational Edge, Fredrik Ferm points out that many people do not fully understand what a UML subsystem really is. This is borne out by my own experience in numerous conversations with UML users. Why is this so? Why does a concept that seems to be so natural and intuitive cause so much confusion? After all, no one really needs to be convinced that the general concept is useful. Decomposing a complex system into a set of less complex subsystems is a sound approach for practically any complex engineering system -- software or otherwise.

One reason for the confusion may be the very pervasiveness of the term. That is, subsystems have many different concrete forms, depending on the specific application domain or system. Upon hearing of the concept in UML, people often intuitively interpret it according to semantics that stem from their personal experience. This is often the starting point for the misunderstanding. However, there is more to it.

Based on concepts from different contexts

The biggest cause of confusion stems from the unorthodox combination of base concepts that the UML 1.4 metamodel uses to define subsystems. In this model, the subsystem concept is defined as a direct descendant of two quite different concepts: classifiers and packages. This is shown in Figure 1.

Figure 1: UML metamodel for subsystems

Let's examine the semantics of these base concepts.

A classifier in UML represents a specification (descriptor) for a set of runtime entities that exhibit the same kinds of structural and behavioral features. The concept of a specification is an everyday one that we all understand: house blueprints, system requirements documents, classes in object-oriented languages, assembly instructions, and so on, are all examples of specifications. These specifications serve as models of the runtime entities that are ultimately realized by some program.

A package, in contrast, is a container for holding various elements of a UML model, including classifiers. It is directly analogous to a UNIX directory or Windows folder. A package does not represent or specify anything, nor does it model any kind of runtime entity; it is merely a mechanism for conveniently grouping parts of a complex model. The criteria for grouping model elements into packages are not prescribed by UML. In general, the model elements that are grouped into a package do not necessarily have to correspond to a grouping of corresponding runtime entities. For instance, we may decide to package together all the model elements done by a particular developer, or all elements that were done on a particular day of the week.

So, what does a "cross" between these two radically different kinds of things represent? What do we get when we cross a shoebox with the instruction sheet for assembling a piece of Ikea furniture?

An old children's riddle reveals the difficulties inherent in mixing these concepts:


Question: "What do you get when you cross a color with a fruit?"

Answer: "An orange."

Whether or not you think this is a clever answer (depends on your age), it is easy to see through the trickery here. Yes, an orange is a fruit, and, yes, the same English word denotes a color, but the two notions exist in different and unrelated domains of discourse. So, what is an "orange" then? Either a color or a fruit, depending on the context (domain of discourse). The multiple inheritance that defines the UML subsystem concept presents a similar problem.

● In the design repository where the model is kept, the term "subsystem" denotes a package containing all the various model elements (specifications, contained packages, etc.) related to a particular runtime subsystem.

● In the runtime environment, on the other hand, the same term represents the runtime incarnation of the subsystem, which is specified by the various model elements contained in the corresponding package. That is, in this specific case, there is a tight correspondence between the runtime grouping and the grouping of model elements.

By using the same term for these two different things, we raise the concept to a higher level of abstraction -- a level that ignores the difference between elements in the design environment and their runtime manifestations. This leaves the subsystem in a curious "betwixt and between" position, which I call the polysubsystemibus effect.

Note that the phenomenon of ignoring distinctions between domains is not all that unusual. In everyday speech, for example, we often substitute one dissimilar thing for another as a matter of convenience. For instance, when showing someone a photograph of a person called Sam, we might say "this is Sam," rather than the more precise "this is a photograph of Sam." In most cases, this will not cause misunderstandings, since we are attuned to the common verbal shortcuts within our own cultural context.

If we now turn back to the diagram in Figure 1, we can see that the effect is similar for the multiple inheritance used to derive the subsystem concept. This is a somewhat unconventional interpretation of multiple inheritance: It implies that a UML subsystem is either a package or a classifier, but not both. The specific meaning depends on the context.

A further complication is that the UML definition of a subsystem includes idiosyncratic refinements. For example, although they are classifiers, UML subsystems cannot have any associated behavior of their own (a restriction that does not apply to UML classifiers in general). This means that the behavior of a subsystem is exclusively the result of the behavior of its internal parts. Similarly, any structural and behavioral features that a subsystem declares as a classifier must be realized by its parts rather than by the subsystem "object" itself. In fact, if the subsystem is marked "not instantiable," there cannot even be a subsystem object. These unusual and ill-justified restrictions (the UML specification provides no rationale for them) add to the general confusion surrounding subsystems.
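One way to picture this restriction is as a pure facade. In the hypothetical Java sketch below (all names invented), the subsystem-level type declares an operation but contributes no behavior of its own; every request is delegated to its internal parts.

    // Hypothetical sketch of the "no behavior of its own" restriction:
    // the subsystem type only forwards requests; all actual behavior
    // lives in its internal parts.
    interface OrderSubsystem {
        void placeOrder(String item);
    }

    class OrderSubsystemImpl implements OrderSubsystem {
        private final Validator validator = new Validator();   // internal part
        private final Warehouse warehouse = new Warehouse();   // internal part

        @Override
        public void placeOrder(String item) {
            // Pure delegation -- the subsystem adds no behavior itself.
            if (validator.isValid(item)) {
                warehouse.reserve(item);
            }
        }
    }

    class Validator {
        boolean isValid(String item) { return item != null && !item.isEmpty(); }
    }

    class Warehouse {
        void reserve(String item) { System.out.println("Reserved: " + item); }
    }

The point of the delegating type is organizational only: removing it would not remove any behavior from the system.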

An alternative metamodel for subsystems

Overall, I suspect that there would be much less confusion if the subsystem concept were defined as shown in Figure 2. In this model, the subsystem has an associated package to group the various repository-based specifications associated with it. This definition maintains a clear separation between the two contexts and avoids the confusing polysubsystemibus effect.

Figure 2: Alternate metamodel for subsystems
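As a rough rendering of Figure 2 in code (the class names are mine, not UML's), the subsystem would remain a classifier and merely hold an association to the package that groups its specifications, rather than inheriting from both concepts:

    // Hypothetical rendering of the alternative metamodel: Subsystem is
    // a Classifier only, and is *associated with* a Package instead of
    // also inheriting from it.
    abstract class Classifier { }

    class ModelPackage {
        // Container for the model elements that specify the subsystem.
    }

    class Subsystem extends Classifier {
        // Association, not inheritance: the package groups the
        // repository-side specifications of this runtime concept.
        private final ModelPackage specification;

        Subsystem(ModelPackage specification) {
            this.specification = specification;
        }

        ModelPackage getSpecification() { return specification; }
    }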

I think it is unfortunate that the UML does not define subsystems in this way -- or a similar one. But until it does, the subsystem will remain an intriguing and mysterious creature that seemingly dwells in two places at once -- and we will continue to need articles like this one to hunt it down.


XDE Tester v2003 Overview Presentation

Testing Java and Web Applications with IBM Rational XDE Tester

[Slide deck, 26 slides; only the slide titles survive in this extraction:]

● Agenda

● The Importance of Complete Testing: Hidden Bugs

● The Challenge of Manual Testing with Short Test Cycles

● Test Automation With XDE Tester

● Powerful Script Development Environment

● Integration into the XDE/WSAD/Eclipse 2 Shell

● Extensible Development

● Addressing Script Maintenance

● ScriptAssure Ensures Resilient Scripts

● ScriptAssure: Lowers Script Maintenance

● ScriptAssure: Dynamic Data Testing

● Java Language Enables Powerful Test Scripting

● Leveraging Existing Java Assets

● Moving Forward

● The Evaluation Process