


Nuclear Instruments and Methods in Physics Research A 389 (1997) 22-25

Panel session on component software

Nigel Baker a,*, Jean-Marie LeGoff b,1, Ian Willers b,2

a Faculty of Computer Studies and Maths, University of the West of England, Bristol BS16 1QX, UK
b ECP Division, CERN, Geneva 1211, Switzerland

Abstract

Component software and distributed object architectures are receiving considerable attention at the moment. With HEP experiments increasing in size and complexity, shouldn't the physics community look to adopt this technology? But what is component software, and what are its potential benefits and disadvantages? Can quality components be bought off the shelf? Which component models are best? Is the technology mature enough? Perhaps physicists should be actively involved in the standards-making process to produce common facility components for HEP experiments and accelerators? This panel session aims to answer some of these questions and provide feedback on experience gained in using this technology.

1. Introduction

Software development has always been an intrinsically difficult and expensive process. It has been estimated that the difference in productivity between a good programmer and a bad one can be as much as a factor of 100. However, with the falling cost of CPUs and communication bandwidth, and with the attraction of increased performance through concurrency and parallelism, the motivation has been to design ever larger distributed systems. As a result, software complexity has increased, leaving developers searching for mechanisms to control this complexity explosion. The object-oriented programming paradigm was heralded as a solution for managing complexity, but the implementation inheritance used in most languages is almost incompatible with distribution. However, the fusion of object technology and distribution has produced several competing distributed object architectures which encourage the use of interoperable, interchangeable, reusable software components. The question of using reusable software components, and the state of this technology, is receiving considerable attention. The panel session aimed to set out the benefits and disadvantages of component software by discussing its main characteristics, the leading standards and the state of current technology. The panelists have long-standing experience

* Corresponding author. Tel.: +44 1179 656 261. E-mail: [email protected].
1 E-mail: [email protected].
2 E-mail: [email protected].

in distributed object-based systems and in the use of distributed object-based toolkits for the development of distributed control systems. The panel session started with a tutorial-style overview of the main issues in building distributed systems and a snapshot of the current state of distributed object architectures and component standards. In the second part of the session an account was given of experience gained in designing and building large distributed applications using component technology. The session was then opened up to the floor, and a general discussion took place covering the major issues and characteristics of component technology. This report attempts to give an impartial summary of the views and conclusions expressed at the session.

2. About component software (N. Baker, University of the West of England)

2.1. What is component software?

A component, or application element, is a reusable piece of software that can be plugged into other components and inter-operate with them even if it was written in a different programming language by a different vendor, compiled at a different time, and runs on a different operating system and hardware platform. Systems can then be assembled out of tested, reusable and independently upgradable components. Observers argue that this approach to software development will radically change the software industry. An infrastructure is required to support this "plug and play" paradigm, and

0168-9002/97/$17.00 Copyright © 1997 Elsevier Science B.V. All rights reserved

PII S0168-9002(97)00033-5



further, it is expected that this architecture will be distributed and will allow the system to grow as components are added.

2.2. Why distributed components?

It is agreed that distributed systems are complex, so why build them? The motivations are that organisations, people and information are themselves distributed. Utilisation of many smaller machines (servers) offers a potentially better performance-to-cost ratio. Systems can grow in small (server or component) increments over a large range of sizes. Components should be designed to fail independently, thereby increasing reliability and availability. A final motivation is that hardware and software components can be selected from multiple vendors. The disadvantages are the management of the complexity, and the coping with partial failure and security, that distribution inevitably introduces. An ideal component infrastructure or architecture should therefore be inter-operable between different hardware and software components; be scalable, so that the system can grow over a large geographical extent without degradation of performance; be coherent, so that it behaves as a single system; be available; provide application-building tools; and provide a fundamental set of managed services.

2.3. What are components plugged into?

All system components, including applications, must adopt a global management approach with a common interface to management tools. Examples of core fundamental services are: component location and access service (naming, replication, migration, life cycle), communication service (synchronisation, transaction, multicast, concurrency), security service (authentication, access control), availability service (replication, fault tolerance), time service (a notion of global time), and management services (event handling, monitoring). The key to building open distributed systems out of components is to have an architecture in which the fundamental services are part of the model and appear transparent to the application builder. In addition, these services must be specified and implemented to a well-known standard. It is most important that, as more components are added to the system, they can be managed in a common way. So, for example, all components must handle events in the same way, so that when a new application-defined component is plugged into the system it can not only be located and accessed by all other objects but can also join in the common event-handling scheme. In a similar way, components must be able to plug into the same security service, the same transaction service, the same global time service, and so on. If components handle events or security in their own user-defined way then they will not be able to plug into these common services, reducing the interoperability, consistency and coherence of the system. It is therefore crucial that these services are part of the architecture, and also as transparent to the application component builder as possible, in order to hide the complexity.

2.4. Why an object-based approach?

All component interfaces must be well defined and provide a contract between the service provider and the client. The object is the ideal abstraction on which to build this model. Object interfaces allow the decoupling of components, hide heterogeneity and allow the use of multiple implementation languages. The fundamental services can be organised as a hierarchy and hidden behind the invocation call to the target object. So if a target server object is classed as being secure, transactional and replicated, then an invocation by a client will lead to a secure, transactional, replicated service. However, all the client does is make a local method call to pass the parameters and receive the reply.

2.5. Component standards

A number of groups have been working towards this goal, but the major architectures and standards are Microsoft's OLE 2.0/COM, the Object Management Group (OMG) CORBA architecture, and the International Standards Organisation ISO 10746 Open Distributed Processing Reference Model.

2.6. The OMG architecture

To quote the OMG, their mission is to "develop a single architecture using object technology for distributed application integration, guaranteeing reusability of components, interoperability and portability, with a basis in commercially available software". The focus is to "develop easily usable off-the-shelf component standards". It is important to understand that the OMG architecture is still evolving and is a specification of services and interfaces. Various vendors supply toolkits and components which conform to the specifications. The common object request broker architecture (CORBA) specifies the location and access service by which distributed objects invoke one another and receive responses. OMG IDL is an object-oriented interface definition language which is used to specify object methods and attributes. It is designed to map onto multiple programming languages such as C++, Ada, Smalltalk and Cobol. CORBAservices specifies a set of fundamental services which, as explained above, are mandatory for supporting a plug-and-play style of component interaction. All of the key services mentioned above are included in the architecture, although some are yet to be fully specified. The role of CORBAfacilities, however, is to provide common higher-level services which, although




they may not be essential to all applications, may well be required by most. Specification is under way for four major sets of facilities, including: user-interface facilities for compound documents, scripting and desktop management; information management facilities; and task management facilities to support workflow, agents and rule management. Applications make use of CORBAservices and CORBAfacilities through interface inheritance. A Domain Technology Committee was set up at the end of last year so that groups of companies can help in specifying components that will support various markets such as health care, manufacturing, telecommunications and finance.

2.7. Microsoft OLE 2.0/COM

Object linking and embedding (OLE 1.0) began as an architecture to support component-based, or compound, documents. In contrast to the OMG architecture, Microsoft focused on interoperability between desktop software and applications. The primary document displayed on the desktop is made up of a number of software components. Visually, at the user interface, these components appear as text, charts, tables, images and graphs contained within the primary document. However, the data contained within these components is managed by other applications (databases, spreadsheets, etc.). The interaction at the user interface is transformed so that, for example, a cut-and-paste operation between two graphical objects will involve data exchange between two different applications, with all the data presentation and display problems that this entails. The interoperability between applications that share compound-document data is supported by an object architecture. The key parts of the architecture include a data exchange model (DDE), structured storage to manage the data as a single entity, automation to automate user manipulation of compound documents, and an underlying object model. It was OLE 2.0 which provided the object model (COM) and improved the interoperability between components. The common object model, or component object model (COM), provides similar basic services to those in CORBA and CORBAservices, such as transparent location and access between objects. Object interfaces can be defined using COM IDL and specify the interaction allowed between application components. As in CORBA, the objects may be implemented in a variety of languages. Reuse is achieved through delegation and aggregation rather than inheritance, and objects can support multiple interfaces.

3. Experience in building applications using component software technology (Jean-Marie LeGoff, CERN)

The main purpose of using component technology is to promote modular software production and independent component development, to adopt a plug-and-play approach, to allow scalability and to enhance long-term maintenance. Independent software component development is an ideal match for constructing large systems like HEP experiments, where the responsibility for different parts is scattered across European or even worldwide institutes. Because of limited resources, software parts are quite often engineered remotely and then brought to a central location where they are compiled and linked together. The success of the software build depends on the specification of the software interfaces. With different implementation languages and operating systems, the chances of incompatibility are high. A common typed interface definition language which generates language-dependent templates, as found in component architectures, improves system development and build considerably. Even then, integration is limited, for without common ways of dealing with events, for example, components cannot plug into the common monitoring system. Systems management, too, is poor without a good monitoring system.

Similarly, without a consistent notion of global time amongst distributed components, it is impossible to reason about what happened before this or that event. The same applies to security. Components must be able to plug in and inter-operate according to a predefined common security policy. Many of these common services are complex and, our experience suggests, well beyond the capabilities of the average programmer, but they are crucial to building robust, secure, consistent distributed systems. All of this motivates the need for a component approach to these ever larger systems, in which components can be pre-tested for fitness of purpose before integration. New systems invariably attempt to reuse proven legacy software that has operated successfully in older systems and is well understood. Integration and reuse of this software within a new system always presents a problem. With the object-based approach, legacy software can be encapsulated and an interface to it defined using IDL. This legacy component can then plug into the common management services in the same way as newly defined components; the only disadvantage, depending on size, might be the large granularity of this object. Although component architectures are still evolving, the existing toolkits have demonstrated support for distributed development, software evolution and system scalability. Component technology does help manage complexity, but as systems increase in complexity, software development remains a difficult task and requires a team of software experts to handle component management issues.

4. Discussion and conclusions

This part of the session was very animated, with many aspects and issues of component technology being



discussed. The following attempts to present a summary of the various conflicting views and conclusions of those discussions.

There was a feeling that the physics world should push for a firm basis on which to build its applications. Areas regarded as important were databases, object exchange, interfaces, runtime systems, etc. Without these in place there is no point in building applications. This observation is independent of the component issue.

CORBA has been used by physics groups to try out these new ideas. The reputation it gained would indicate that it was slow and complex to use. Those experienced with CORBA pointed out that project development had proceeded with very early toolkits, well before the OMG architecture had unfolded. A lot of the complexity had come about because the software developers were designing their own "plug and play" infrastructure. More of the CORBA architecture is now commercially available, and the specification of the architecture is nearing completion. Just as telecommunications, manufacturing and healthcare are specifying domain-specific components within the OMG, perhaps the HEP community should consider doing the same.

Java is a new technology which could perhaps displace both OLE/COM and CORBA, and at the moment it is certainly cheaper and more widely available. The idea of passing scripts which can be executed at a client site is very attractive. It was pointed out that Java is essentially a programming language and was not intended to provide an infrastructure to support common management services. However, a management API set of classes is now being defined, along with other new class hierarchies specific to particular application domains. One of the problems is that, because of security, a Java applet can only request services from the machine from which it was obtained, making distributed computing awkward. There are already CORBA IDL to Java compilers available, so Java can be used as an implementation language to build components that will plug into the OMG architecture.

Another view was that the physics community is not ready for yet another new software idea: it is having enough trouble tackling C++ and object orientation.

The general conclusion was that the component architectures were still evolving and that any one of these technologies could fall from grace. However, the physics community should carefully monitor the progress of these technologies and should be in a position to adopt component technology as and when the success stories appear.

II. SOFTWARE ENGINEERING