
PROFESSIONAL COMPUTING
No. 81, THE MAGAZINE OF THE AUSTRALIAN COMPUTER SOCIETY, NOVEMBER 1992

Win $3200 with a case

Did you know that you could win $3200 for the submission of a case study — and you thought that CASE had to do with software engineering, not with making money. See details below.

CONDITIONS GOVERNING THE AWARD OF THE ACS CASE STUDY PRIZE

The conditions of this case study prize are as follows:

1. The ACS Case Study Prize will be awarded for

the best submitted case study of an industrial, commercial or administrative computer application implemented in 1992.

2. Competition for this prize is open to individual members of the Society, and to companies or organisations which are Corresponding Institutions of the Society or Corporate members of the Society (including departments within companies).

3. In the case of individuals, a cash prize of $3200 for the 1992 case study will be awarded to the winning entry. In the case of companies, an engraved plaque will be awarded to the winning entry.

4. If in any year no case study of sufficient standard to merit the award of prizes is submitted, the Council of the ACS may suspend the award in that year.

5. If the prize is awarded for a case study submitted in the names of more than one author, the prize is to be divided equally among the authors.

6. Case studies should be submitted in triplicate, so as to reach the Chief Executive Officer of the Australian Computer Society, at P.O. Box 319, Darlinghurst NSW 2010 by March 31, 1993.

7. A case study submitted for the A.C.S. Case Study Prize should be concerned with a computer application, and should cover preliminary studies (including a description of the environment of the application), system design, equipment specification, selection and installation (if relevant), and subsequent implementation. It should conclude with a critical summary of possible improvements in the light of operating experience.

8. The case study may be published in a National ACS publication and should therefore be submitted in a form suitable for publication.

Enquiries may be made through the National Office on (02) 211 5855.

PRESIDENT'S MESSAGE

State of the society

IT was a pleasant task to accept an invitation to attend the recent SA Branch conference held in Victor Harbour SA. The invitation was on the condition that I bring the attendees up to date with a report on the state of the society.

Membership

This is an important indicator of the health of the society and I am pleased to report that to the end of September net membership had grown by seven per cent. It is expected that by the end of the year a figure of nine per cent will be achieved, similar to last year.

It is of some concern that a sizeable group of people have failed to renew membership, because with these people retained as members we would achieve significant growth. The largest group of non-renewers were provisional associates and affiliates, who may have taken out membership to achieve professional development discounts.

Branches

Branches are the front line of the society. This year I have been able to visit WA, NT, NSW, VIC, QLD, SA and the Townsville chapter. The chapter recently celebrated 25 years of service to local members — a remarkable achievement. Most branches reflect the fact that they are undergoing major re-adjustment or re-alignment. Some have already achieved this. It is no secret that some of our longer-standing office-bearers and volunteers had reached a point where they found it difficult to come up with new ideas. Many have turned over responsibilities to newer, perhaps younger members who are giving branches a new lease of life. Membership growth shows they are already getting the support.

Technical Boards

As reported in a previous issue the Technical area has also been the subject of a re-focussing. Three boards now span the area and directors can now concentrate on a clearer set of objectives. Great things are expected.

Seminars

In the current economic climate it is extremely difficult to attract attendance to seminars. At both the branch and National level it has been necessary to take extreme care in planning and presenting professional development. Large-scale conferences are considered risks at present and focus has been on a larger number of smaller, focussed activities. Members continue to get great value for money in PD.

Corporate Recruitment

Following a decision by Council to eliminate non-professional members, I, the CEO and many Branch Chairs have met with large organisations where opportunities existed for conversion of staff to ACS membership. Many organisations now have a much better understanding of the ACS, and the CEO is working hard to sign up new members.

Geoff Dober FACS PCP

Industry and Government Relations

Discussions have continued with bodies like the AIIA and the IEAust. We believe that there are many opportunities for complementary activities and roles, and a good working relationship is being developed.

I compliment the SA Branch, who achieved something which shouldn’t be so remarkable but probably was — the Premier and Deputy Leader of the Opposition opened their Branch conference. SA is trying to become the IT State in Australia, and it is clear that the government and the ACS both recognise the importance of the task and are starting to work closely. Several other ACS branches are working closely with state governments and I encourage other branches to get involved in the important developments of their states.

Unemployed Members

Unfortunately too many members are still in this category. I thank the people who responded from SE Asia with advice or suggestions regarding employment in the region as a result of my letter to them. Peter Khoo, a member from Singapore, actually visited Melbourne and made himself available to a group of unemployed members for discussions. Thanks, Peter.

Summary

I hope you can appreciate the positive aspects of the ACS. Most of our indicators are pointing in the right direction and, with the continuing support of members and office holders, we can really achieve significant networking, professional development and make a professional difference.

This is the time of the year when most branches hold elections. I sincerely thank those people who, for any reason, will not hold office next year, for their contribution, and challenge the new team to do their best.


THE AUSTRALIAN COMPUTER SOCIETY

Office bearers
President: Geoff Dober.
Vice-presidents: Garry Trinder, Bob Tisdall.
Immediate past president: Alan Underwood.
National treasurer: Glen Heinrich.
Chief executive officer: Ashley Goldsworthy.

PO Box 319, Darlinghurst NSW 2010. Telephone (02) 211 5855. Fax (02) 281 1208.

Peter Isaacson Publications
A.C.N. 004 260 020

PROFESSIONAL COMPUTING

Editor: Tony Blackmore.
Editor-in-chief: Peter Isaacson.
Advertising coordinator: Christine Dixon.
Subscriptions: Jo Anne Birtles.
Director of the Publications Board: John Hughes.

Subscriptions, orders, correspondence
Professional Computing, 45-50 Porter Street, Prahran, Victoria, 3181.
Telephone (03) 520 5555. Telex 30880. Fax (03) 510 3489.

Editorial
Tony Blackmore
PO Box 475, Ringwood 3134
43 Craig Road, Donvale 3111
Telephone (03) 879 7412 Fax (03) 879 7570

Advertising
4M Media Pty. Ltd
PO Box 83, Armadale 3143
50 Northcote Road, Armadale 3143
Telephone (03) 822 4675 or 822 1505 Fax (03) 822 4251

Professional Computing, an official publication of the Australian Computer Society Incorporated, is published by ACS/PI Publications, 45-50 Porter Street, Prahran, Victoria, 3181.

Opinions expressed by authors in Professional Computing are not necessarily those of the ACS or Peter Isaacson Publications.

While every care will be taken, the publishers cannot accept responsibility for articles and photographs submitted for publication.

The annual subscription is $50.

PROFESSIONAL COMPUTING
CONTENTS: NOVEMBER 1992

PRIVACY NEEDS MORE THAN GOOD INTENTIONS: The nature of unauthorised releases of personal data is analysed and technological measures are described which can help in restraining such abuses. But security cannot be achieved by technical means alone, and organisational measures necessary to complement the technical devices are discussed. 4

LETTERS TO THE EDITOR. 10

AUTOMATED SOFTWARE DISTRIBUTION PUSH: A common problem faced by IT management today is how to maintain or improve current service levels with a reduced budget, yet with an ever increasing installed base of personal computers. One area receiving increased focus from management is workstation management. 11

X/OPEN AND INTEROPERABILITY: X/Open specifications are endorsed by major organisations worldwide. In this article taken from the organisation’s ‘X/Open in Action’ series, interoperability and the specifications available are discussed. 13

TRIPS — A RIGHT SIZING PROJECT: This case study of the TRIPS project, for the ACT Government's Department of Urban Services, was the winning entry for the ACS Case Study prize. 20

ACS in View: 23

CLIENT/SERVER TECHNOLOGIES — PART 1: There is more to Client/Server technology than the anthropomorphic view common in most discussions of the topic. 25


COVER: ‘Computers by Clipart’, a fragment of the computer representations from the COREL DRAW clipart library.


THE CONCEPT BEHIND OUR CASE PRODUCT

System Architect has the power to handle your most complex applications. And it's so easy to use, even beginners will be productive in no time.

Use such methodologies as DeMarco/Yourdon, Gane & Sarson, Ward & Mellor (real-time), Entity Relation diagrams, Decomposition diagrams, Object Oriented Design (optional), State Transition diagrams, and Flow Charts.

Create an integrated data dictionary/encyclopedia, and get multi-user support both with and without a network.

[Screenshot: System Architect running under Windows, showing an entity model and a “Modify Data Element” dialog.]

CASE Trends found System Architect “to be extremely easy to use ... with many features that are completely lacking in higher priced competitors.” Toshiba found that “System Architect stood out from many other prospects because it had the best core technology.” System Builder called System Architect “truly a price/performance leader.”

Work in the Windows 3.1 environment, or OS/2 Presentation Manager, and count on context-sensitive help.

Stay within your budget. At $4,300 System Architect is quite affordable — and it runs on almost any PC.

Take advantage of such advanced features as:
• Normalization
• Rules & Balancing
• Requirements Traceability
• Network Version
• Import/Export Capability
• Custom Reporting
• Extendable Data Dictionary
• Auto Leveling

Rely on a proven CASE product. System Architect has received rave reviews from the press and users. IEEE Software Magazine called System Architect “a useful, well-planned, affordable CASE tool.”

Schema Generator now available!

System Architect
Prologic Pty Ltd
75 Federal Street, North Hobart, Tas 7000
ENQUIRIES: Fax: (002) 34 2719 or Phone: (002) 34 6499

Microsoft Windows Version 3.0 Compatible Product

System Architect logo is a trademark of Popkin Software & Systems Incorporated. IBM is a registered trademark of IBM Corp. Microsoft is a registered trademark of Microsoft Corp. All prices and specifications are subject to change without notice at the sole discretion of the company. Product delivery is subject to availability.

Supporting IBM's AD/Cycle

PRIVACY & THE LAW

Privacy needs more than good intentions

Any organisation with a will to permit unauthorised access to information, or a lack of will to prevent it, can readily undermine all other measures by tolerating one faulty link in the chain.

Organisations in both the public and private sectors have shown themselves to be unable to exercise effective self-restraint.

Governments which seek to protect their citizens against abuse of individuals’ information privacy interest are left with no option, other than to establish legislative standards, empower a permanent watchdog, and make officers and directors personally responsible for action and inaction which results in significant abuse.

Roger Clarke

By Roger Clarke

BETWEEN 1990 and 1992, the NSW Independent Commission Against Corruption (ICAC) conducted an investigation into allegations concerning widespread unauthorised access to personal data.

It concluded that “information from a variety of State and Commonwealth government sources and the private sector has been freely and regularly sold and exchanged for many years... A massive illicit trade in government information ... has been conducted with apparent disregard for privacy considerations, and a disturbing indifference to concepts of integrity and propriety ... Laws and regulations designed to protect confidentiality have been ignored ... (Even where criminal sanctions existed), information ... has been freely traded” (ICAC 1992, pp. ix, 3, 4).

The Commission found 155 identified individuals to have engaged in corrupt conduct (pp. 92-94), and 101 others in conduct liable to allow, encourage or cause the occurrence of corrupt conduct (pp. 94-95).

Many of these were private investigators, who facilitated the trade in personal data. Many others were employees of government agencies who passed data to unauthorised recipients.

Some substantial corporations, listed in Exhibit 1, were also found to have been directly involved.

Exhibit 1: Substantial Corporations

Which ICAC found to have engaged in corrupt conduct:
— Citicorp Australia Ltd
— Toyota Finance Aust. Ltd

Which ICAC found to have engaged in conduct liable to allow, encourage or cause the occurrence of corrupt conduct:
— Advance Bank Aust. Ltd
— Government Ins. Office
— ANZ Banking Group Ltd
— Manufacturers’ Mutual Ins. Ltd
— Commonwealth Bank
— New Zealand Insurance
— National Australia Bank
— NRMA Insurance Limited
— Westpac Banking Corp
— Custom Credit Corp’n Ltd
— Mayne Nickless Trans Mgt
— Esanda Finance Corp’n Ltd
— Telecom Australia

Which ICAC found to have been sources of unauthorised releases of data:
— NSW Dept of Motor Transport (now Roads and Traffic Authority)
— Australian Customs Service
— Australia Post
— NSW Police
— Department of Immigration
— Prospect County Council
— Department of Social Security
— Sydney County Council
— Health Insurance Commission
— Telecom
— Credit Reference Association

This article is taken from a paper written at the invitation of ICAC, to consider the technological and organisational measures necessary to protect personal data against unauthorised access.

The paper addresses only that narrow purpose. It therefore remains silent about the many other aspects of a comprehensive strategy for information privacy protection, such as data collection, data retention, public access to information about data practices, and subject access to data about themselves.

It even excludes discussion of topics closely related to data security, and in particular data integrity, data quality, and how and why data access by third parties is authorised.

The paper commences by presenting an analysis of unauthorised releases of personal data, drawing upon material throughout the Report and in the summary (pp. 157-162). The remainder of the paper discusses technological and organisational measures whereby unauthorised release can be minimised. The interdependence of the two kinds of measure is stressed.

The need is underlined for data security strategy and procedures to be established at both the levels of individual organisations and of Government.


An analysis of unauthorised releases

There is a variety of ways in which data can reach a person or organisation not authorised to receive it. The release of personal data involves an action by a person performing a role, with one or more motivations, and on behalf of one or more beneficiaries. Categories of Role, Motivation and Beneficiary are shown in Exhibit 2, with those categories which are in themselves unauthorised shown in boldface type.

A release is unauthorised if any of the role, motivation or beneficiary is in an unauthorised category; for example, a disclosure is unauthorised if it is by an employee acting on behalf of an authorised recipient, under a data interchange agreement, but to an organisation which does not have authority to receive the data. Categories of unauthorised access which are documented in the ICAC Report are identified with an asterisk.
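Expressed as code, the rule is a simple disjunction. Here is a minimal sketch in C; the type and field names are illustrative assumptions, not taken from the Report:

    #include <stdbool.h>

    /* Illustrative encoding of the classification: each aspect of a
       release is classified independently as authorised or not. */
    struct release {
        bool role_authorised;         /* person and capacity of access   */
        bool motivation_authorised;   /* e.g. data interchange agreement */
        bool beneficiary_authorised;  /* recipient of the data           */
    };

    /* A release is unauthorised if ANY aspect falls in an
       unauthorised category. */
    bool release_is_unauthorised(const struct release *r)
    {
        return !r->role_authorised
            || !r->motivation_authorised
            || !r->beneficiary_authorised;
    }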

It is noteworthy that no instance appears to have come to light in ICAC’s investigations in which new access mechanisms were involved; all cases involved active breach of safeguards by the staff of an organisation authorised to directly access the data, negligence (in the sense of failure to implement safeguards), and/or exploitation of the inadequacies of safeguards.

A data security strategy must of course address the risk of active breaches by outsiders using ‘high-tech’ approaches. The empirical evidence provided by ICAC’s investigations makes clear, however, that strategies must also address breach by outsiders using simple techniques, and abuse by insiders of the power they have as a result of the positions they occupy.

Exhibit 2: The release of personal data — a classification scheme

Role
— a person normally with access to the data:
  — acting with actual authority to access the data, as agent for an organisation authorised to have access
  ★ acting with wrongly presumed authority to access the data, as agent for an organisation authorised to have access
  ★ acting without authority to access the data:
    ★ as principal
    ★ as agent for an organisation not authorised to have access
— a person normally without access to the data:
  ★ with sufficient knowledge of access mechanisms, and:
    ★ who acquires an id
    ★ who assumes an id
  — who establishes new access mechanisms:
    — by “hacking” using a dial-in or network connection
    — by wire-tapping

Motivation(s)
— monetary or similar consideration
— fulfilment of a data interchange arrangement
★ a matter of principle

    /* A job-seeker's advertisement, printed as a C program. */
    #include <stdio.h>

    int main(void)
    {
        printf("%s%s%s%s%s%s%s%s%s%s%s%s%s%s",
            "\n ------------------------------------------------------- ",
            "\n Hello World",
            "\n Friendly Specialist Software Developer ",
            "\n ",
            "\n Graduating 1992 M.App.Sci. RMIT ",
            "\n (plus 2 years professional experience) ",
            "\n ",
            "\n seeks challenging career with forward-looking company ",
            "\n for 1993",
            "\n",
            "\n ",
            "\n call: fred (03) 547-5702",
            "\n _______________________________________________________ ",
            "\n \"perceive those things that can't be seen\" ");
        return 0;
    }

After 12 years in the public sector Tom Stianos joined SMS.

Tom Stianos

“The public sector gave me valuable project management experience on large and important initiatives and diverse senior executive assignments in a number of enterprises. Consulting has opened the door to a wider range of clients and interesting opportunities. One guiding principle has remained constant for me: a commitment to quality and a business focus in IT endeavours. This made SMS particularly attractive.”

Tom’s background in business as well as IT made him ideally suited to bridge the gap between the two. We need more people like Tom to continue our steady growth. If you would like further details call Phil Johnston in confidence.

SMS Consulting Group Pty Ltd
Ground Floor, 441 St Kilda Road, Melbourne VIC 3004.
Fax: (03) 820 0002 Phone: (03) 820 0899

Level 6, 90 Mount Street, North Sydney, NSW 2060.
Fax: (02) 955 5796 Phone: (02) 954 4729

Also an Office in Canberra


The one generic strategy which has little place in a personal data security strategy is tolerance of errors and abuses.

Beneficiary/ies
— an organisation authorised to have access to the data
• an organisation which does not have authority to access the data
• the employee or contractor who accesses the data

Notes:
1. Bold type indicates unauthorised categories.
2. An access is unauthorised if any aspect of it is in an unauthorised category.
3. An asterisk indicates categories documented in ICAC (1992).

Alternative data security strategies and measures

Exhibit 3 identifies a range of approaches which can be adopted in establishing a strategy whereby data security risks can be managed. To implement a data security strategy, an appropriate set of measures must be selected from the wide range of control procedures that are already documented and well understood, or which can be devised to meet particular needs.

A (far from exhaustive) list of generic measures is shown in the Appendix to this paper. It is not suggested that all of these are needed in any particular organisation; some are alternatives, some are more expensive than others, and some are only applicable in particular circumstances.

Exhibit 3: Generic Risk Management Strategies
• Proactive Measures
  • Avoidance
  • Deterrence
  • Prevention
• Reactive Measures
  • Detection
  • Recovery
  • Insurance
• Non-Reactive Measures
  • Tolerance

Proactive data security strategies can be implemented through such measures as the following:
— the potential for security breaches will be reduced if data, particularly sensitive data, is not retained, or is not collected in the first place (Avoidance);
— staff can have communicated to them the obligations which they have to keep data secure, and the sanctions which apply if they breach those obligations, and publicity can be given to disciplinary action, dismissals and prosecutions (Deterrence);
— access to data can be controlled through restricting access to precisely those people who have need to access the data in performance of their functions; staff can be trained to appreciate the sensitivity of data; measures can be undertaken to maintain staff morale, responsibility and loyalty at a high level; and staff who have demonstrated untrustworthiness can be removed from positions which involve access to sensitive data (Prevention).

Examples applicable to reactive strategies include the following:

— logs of data accesses can be maintained, exceptional accesses and exceptional patterns can be searched for, and investigations can be instituted (Detection);
— injured parties can be compensated directly by the organisation whose actions give rise to the grievance (Recovery); and
— injured parties can be compensated from a common pool paid into by all organisations of a particular class (Insurance).

The one generic strategy which has little place in a personal data security strategy is tolerance of errors and abuses. This is because the data-holder is in a poor position to judge the degree of sensitivity of data to each of the many data subjects.

The granularity of data security measures

Differing degrees of protection can be achieved, depending on the rigour with which security measures are implemented. The effectiveness of login-id protections, for example, varies from meaningless to substantial.

To be more than merely a placebo, login-ids must have data access restrictions associated with them which are appropriate to the functions the individual performs.

For example, an attempt by a staff member to access data about a person whose address is outside the staff-member’s legitimate area of geographical interest should be subject to additional control measures, such as exception logging followed by investigation of staff-members who exhibit a pattern of out-of-area accesses, and an on-screen warning to the staff member to that effect.

Moreover, login-ids should not be the only filtering mechanisms to restrain data access; the location of the workstation at which the person has logged in should also be a criterion.

An effective data security strategy must also embody control mechanisms over login-id usage. Multiple concurrent uses of the same login-id should be subject to controls, at the very least exception reporting and investigation.

So too should significant variations in patterns of use, and use from unusual locations, especially distant ones. Login-ids should be disabled during periods of absence, and after significant periods of non-use. Use by persons other than the individual to whom it belongs should be actively discouraged, through the organisation’s disciplinary mechanisms.
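As an illustration of the kind of control meant here, the following C sketch (the record layout is an assumption for the purpose) scans a table of active sessions and reports any login-id in concurrent use at more than one workstation, as input to exception reporting and investigation:

    #include <stdio.h>
    #include <string.h>

    struct session {
        char login_id[16];
        char workstation_id[16];
    };

    /* Flag any login-id appearing in more than one concurrent session,
       for exception reporting and subsequent investigation. */
    void report_concurrent_use(const struct session *tab, int n)
    {
        int i, j;
        for (i = 0; i < n; i++)
            for (j = i + 1; j < n; j++)
                if (strcmp(tab[i].login_id, tab[j].login_id) == 0)
                    printf("EXCEPTION: login-id %s in concurrent use at %s and %s\n",
                           tab[i].login_id,
                           tab[i].workstation_id,
                           tab[j].workstation_id);
    }

The same scan, extended with the workstation's location and the data subject's address, would also serve the out-of-area check described earlier.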

Beyond the limited protection given by login-ids, organisations need to give serious consideration to the use of a token which staff-members must use in order to gain access to the system.

Although it is tenable to design such a system using magnetic-stripe cards, chip-card (smart-card) technology is potentially superior. A further alternative is physiological or “biometric” forms of identification, which can make the use by a person of another person’s login-id still more difficult.

Even with chip-cards, passwords are necessary, to ensure that the person in possession of the token also knows something that only the owner of the card should know. But even passwords provide very limited protection, unless


[Screenshots: three applications built with DataBoss — an environmental drilling equipment scheduling system, a recording artists and sales tracking system, and a trivia game with online help.]

Need to schedule drilling rigs, supplies and crews to find environmental contamination? With DataBoss, you can. This 13-file, 14-screen system was built in 18 hours.

Want to make a slick-looking system to track recording artists, sales and royalty payments? With DataBoss, you can.

Can you customise how your applications work and look, like this wild game? With DataBoss, you can. Note the online help.

[Screenshot: the DataBoss menu system, showing generation, settings, editor/compiler and multi-user options.]

DataBoss was even used to build itself!

Want to build C or Pascal database applications in minutes?

With DataBoss, you can. Amazing product cuts your development time in half, builds relational database systems in Pascal or C.

With DataBoss, you can build relational database applications in minutes.

For more complex applications, your time savings will be even more dramatic.

As one of our users reported last week, “I just finished a project with DataBoss in a day and a half. Before DataBoss, it would have taken me a month.”

Developer Stephen Moore writes, “DataBoss has saved my company hundreds of hours of programming time and thousands of dollars.”

Data Based Advisor magazine agrees: “Using DataBoss is a lot faster than writing applications from scratch, even with a good selection of libraries.”

DataBoss builds systems for you

DataBoss generates all parts of your application: menus, files (with file linking), data entry screens, reports, utilities.

Your applications look professional, with multi-window data entry, table entry, memo fields, scrollable fields, mouse support, look-up tables, context-sensitive help, and even an online manual.

Byte reports, “The applications generated are very sophisticated, with all the usual features, including full field editing,

DataBoss Xbase Engine
This add-on module lets DataBoss C users build C language database applications that are dBase compatible. It produces .DBF files, .NDX indexes. Only $99; call for more information!

field default and validation checking, user-defined queries, automatic index maintenance, customisation capabilities and user-defined error messages.”

No extra language or macros to learn

DataBoss writes programs in customisable Pascal or C. Then, with your compiler, it creates executable applications. You get customisable source code to the libraries and files that DataBoss uses to generate your programs. So you can change the libraries, or use your own.

If you're a veteran programmer or a novice, DataBoss lets you choose your level of involvement with source code. You're supported by DataBoss menus, online help, and our friendly support staff.

Maintenance tools for your applications

Your applications evolve with time. So DataBoss generates systems documentation for each part as you build it. You have all the information when you need it.

If the structure of your database changes, DataBoss lets you update existing data without writing a single line of code.

There are no runtime fees or royalties, so you can develop as many systems as you want. You don't owe us another cent.

Be an “Ace” network programmer

DataBoss builds both single-user and networked systems. Just choose “Multi-User”; DataBoss adds record locking and other network functions. It’s that simple.

Use your favourite compiler. DataBoss C works with Turbo C/C++, Borland C/C++, Microsoft C/C++, QuickC. DataBoss Pascal works with Turbo Pascal.

Call today and give a big boost to your productivity.

Get DataBoss now!

Speed your development with the software that PC Week called “the most complete database creation system around.”

□ Yes! Rush me DataBoss 3.5 for $795!
  □ C version or □ Pascal version
□ Yes! Send me the DataBoss Xbase Engine for $99 (requires DataBoss C)
□ Please send demo disk and info.
  □ 3.5" disk □ 5.25" disk

Name ____________________________
Company _________________________
Address __________________________
City _____________________ P.Code _______
Phone ___________________________
Amount enclosed: $ _____________
I'm paying by: Cheque __ Visa __ MasterCard __
Number __________________________
Expiry date ______________________
Signature ________________________

Mail or fax coupon (or phone) to:

Kedwell Software
ACN 010 752 799
P.O. Box 122, Brisbane Market, QLD, 4106
(07) 379 4551
Fax (07) 379 9422

©1992 Kedwell Software. All rights reserved. DataBoss and Kedwell are trademarks of Kedwell Software. Ad Code A01

Technological measures alone can never be a sufficient implementation of a data security strategy; they must be complemented by organisational measures.

they too are subjected to controls.

A significant literature exists concerning password selection and password compromise (e.g. Jobusch & Oldehoeft 1989, Riddle et al. 1989). Some of the key requirements are that individuals be automatically forced to change their passwords periodically, and that there be active measures in place to discourage trivially discoverable passwords, e.g. ones which are excessively short, repetitive, or spell common words or names (especially the name of the person concerned or their login-id).
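A minimal sketch of such active measures follows, in C; the length threshold and the particular set of checks are illustrative assumptions rather than the cited authors' recommendations:

    #include <string.h>

    /* Reject trivially discoverable passwords: too short, repetitive,
       or spelling the owner's login-id or name.  A dictionary check
       against common words would follow the same pattern. */
    int password_is_weak(const char *pw, const char *login_id,
                         const char *name)
    {
        size_t i, len = strlen(pw);

        if (len < 8)
            return 1;                /* excessively short            */
        for (i = 1; i < len && pw[i] == pw[0]; i++)
            ;
        if (i == len)
            return 1;                /* repetitive: one char repeated */
        if (strcmp(pw, login_id) == 0 || strcmp(pw, name) == 0)
            return 1;                /* spells owner's id or name     */
        return 0;                    /* passes these minimal tests    */
    }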

And yet the simplest ways in which workmates’ passwords can be discovered are to look in their top, right-hand drawers or on the sides of their workstations, express curiosity over whether they use a “sneaky” code, watch them key it, or just ask them.

Technological means must be considered, whereby the risk of captured passwords can be addressed. One example is keying dynamics, whereby not only what is keyed is tested against pre-recorded data, but also how it is keyed, e.g. the delays between the keys being struck.
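A sketch of the idea in C follows; representing a keying profile as a list of inter-key delays with a fixed tolerance is a simplifying assumption:

    #include <stdlib.h>

    /* Compare the inter-key delays observed when a password was typed
       against the user's pre-recorded profile. */
    int keying_matches_profile(const long *observed_ms,
                               const long *profile_ms,
                               int n_delays, long tolerance_ms)
    {
        int i;
        for (i = 0; i < n_delays; i++)
            if (labs(observed_ms[i] - profile_ms[i]) > tolerance_ms)
                return 0;   /* delay pattern deviates: treat as suspect */
        return 1;
    }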

Application to weaknesses identified in the ICAC report

Focussed as it necessarily was on matters involving corruption, the ICAC investigation may not have discovered all the ways in which personal data is leaking from the organisations concerned, and certainly did not address all of the databases which are accessed in an unauthorised manner.

Nonetheless, it is important to assess the extent to which the ICAC-revealed sub-set of abuses could be addressed by conventional or readily contrived data security measures.

People normally without access to the data are gaining access by acquiring or assuming an id, most commonly by telephone-call into a location which does have access. Without studying the details of the particular cases, it is apparent that various counter-measures could be applied; for example:
— no provision of data over the telephone to anyone (Avoidance);
— provision of data only by call-back to approved telephone numbers, or mail-out to approved addresses (Prevention);
— frequent changes of code-words (Prevention); and
— detailed logging of all information provided over the telephone, with subsequent investigation of a sample of cases, and publication of information about such investigations (Detection, Deterrence).

Cases in which persons who normally do have access abuse their position of trust are more challenging to control. Nevertheless, various possibilities exist; for example:
— imposition of the requirement that individuals record a reason for the access, or the identifier of the case on which they were working (e.g. a file-number or transaction-number) (Prevention and Deterrence);
— investigation of all accesses in which no identifier is provided, and of individuals who commonly access without providing an identifier (Detection and Deterrence); and
— investigation of a random sample of accesses (Detection).
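As a sketch of the two detection measures just listed, the following C fragment (the record layout is assumed for illustration) selects for investigation every access recorded without a case identifier, together with a random sample of the remainder:

    #include <stdio.h>
    #include <stdlib.h>

    struct access_log {
        char login_id[16];
        char case_id[16];     /* empty if no reason/case was recorded */
    };

    /* Queue for investigation: all accesses lacking a case identifier,
       plus roughly one in every 'rate' other accesses at random.
       (Seeding of rand() via srand() is omitted here.) */
    void select_for_investigation(const struct access_log *log, int n,
                                  int rate)
    {
        int i;
        for (i = 0; i < n; i++)
            if (log[i].case_id[0] == '\0' || rand() % rate == 0)
                printf("INVESTIGATE: access by %s (case-id '%s')\n",
                       log[i].login_id, log[i].case_id);
    }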

Finally, all of these measures require subsequent action. Breaches must be detected. When a breach is detected, action must be taken to deal with the offender, and to publicise the sanctions applied to the offender, thereby achieving a deterrent effect on others.

Sanctions can only be applied where there is a legal, employment-agreement or other contractual basis. Sanctions will only be applied where the organisation has a data security strategy, measures in place to implement that strategy, and a genuine commitment to enforce it.

Technological and organisational measures

It is apparent from the above discussion that some protective measures necessarily involve automated activities, such as testing of login-ids, logging of accesses and analysis of logs.

Caelli (1992) explains the large amount of work which has been undertaken in relation to the formalisation of security requirements in computer-based systems. Many other measures, however, are, or involve, human actions.

Technological measures alone can never be a sufficient implementation of a data security strategy; they must be complemented by organisational measures.

Similarly, organisational measures can be easily forgotten, abused and subverted; technology needs to be applied to address those human weaknesses. An organisation’s data security strategy must comprise an integrated set of technological and organisational measures.

Establishing a data security strategy and measures

Because of the importance of personal data security, a professional approach must be adopted. Objectives need to be defined, a strategy devised in order to achieve those objectives, measures designed and deployed to implement the strategy, and a monitoring mechanism established and maintained to assess performance against the objectives, and modify the strategy and measures as necessary.

Under such key terms as risk assessment, risk management, contingency planning and accounting controls, a substantial body of knowledge has developed in recent years. Data security is seldom the sole focus of risk assessment; indeed neither is security generally.

Instead, risks to the continuity and quality of services and the integrity and security of both data and operations are generally all considered in an integrated fashion.

In addition to specialist books on the topic, data processing audit texts provide discussions of the issues involved, and frameworks for establishing data security strategy and procedures.

Controls over Controls

The “corporate citizenship” philosophy claims that organisations can be expected to be responsible for the propriety of their own


activities. Quite apart from the alarming contrary evidence in the ICAC Report, many other instances have become public during the last few years in which corporations and government agencies have acted in a manner more cavalier than responsible.

For many, if perhaps not all, organisations, an external control is necessary to ensure that internal controls are created and maintained.

The “industry self-regulation” philosophy claims that the excesses of corporate transgressors can be controlled through self-regulation, either by the marketplace (transgressors will be found out and disciplined by customers, who will take their trade elsewhere) or by their peers (the relevant industry and/or professional association will feel itself and its other members to be disadvantaged by the transgressor’s actions, and will have and use the power to discipline them).

In competitive industries, the evidence is that neither of these mechanisms provides adequate protection against abuses of personal data.

Unless stimulated to do so, organisations will not spend the money and effort necessary to protect personal data. Industry self-regulation must be bolstered by statutory requirements on all organisations. Such requirements will only be meaningful, however, if they are enforced.

Overseas experience has demonstrated quite clearly that leaving enforcement to data subjects (by suing transgressors in the courts) is largely futile (Flaherty 1989). If it judges the personal data privacy of its citizens to be a matter of importance, the Parliament of NSW must not only establish statutory requirements, but establish, empower, and ensure funding for, a specialist body to enforce the law.

Conclusions

As a result of its investigations, the ICAC Report makes a series of recommendations to the NSW Government (ICAC 1992, summarised at pp. 217-221). Of direct relevance to the question of data security are:

3. Security of all information storage and retrieval systems should be constantly monitored and where necessary updated and improved.
4. Access to protected information should be strictly limited and an efficient system maintained to enable the persons responsible for all accesses to be identified.
5. Unauthorised dealing in protected government information should be made a criminal offence.

The report reinforces the necessity for all organisations which handle personal data to establish data security strategies. Such strategies must not focus on “high-tech” intrusions at the expense of abuse of their position by insiders and straightforward breaches by outsiders.

This paper has further argued that each organisation’s strategy must reflect the considerable body of knowledge about data security. It must also incorporate a web of organisational measures, complemented by technological measures. Because failure to implement or enforce key elements can be expected to compromise data security, the strategy must also include internal control mechanisms to detect, and facilitate the investigation of, errors and abuse.

Particularly after the ICAC Report, it would be naive to expect that organisations will devise and enforce such strategies of their own accord. It is essential that the Parliament of NSW create external controls to encourage organisations to comply with society’s expectations. In the absence of externally imposed standards, and enforcement of those standards, improvement in the present privacy-invasive practices cannot be expected.

Roger Clarke is Reader in Information Systems, Australian National University; Director, Community Affairs Board, Australian Computer Society; Vice-Chairman, Australian Privacy Foundation.

Bibliography

Caelli W. (1992) “Evaluating System Security — Now A Requirement”, two-part article in Professional Computing 78 and 79 (July/August and September 1992), 24-28 and 13-19.
Caelli W., Longley D. & Shain M. (1989) “Information Security for Managers”, Macmillan, 1989.
Flaherty D.H. (1989) “Protecting Privacy in Surveillance Societies”, Uni. of North Carolina Press, 1989.
ICAC (1992) “Report on Unauthorised Release of Government Information”, Independent Commission Against Corruption, Sydney, 3 volumes, August 1992.
Jobusch D.L. & Oldehoeft A.E. (1989) “A Survey of Password Mechanisms: Weaknesses and Potential Improvements”, 2-part paper in Computers & Security 7 and 8 (1989).
Longley D. (1989) “Data Security”, in Caelli et al., 1989, pp. 1-80 and 383-4.
Riddle B.L., Miron M.S. & Semo J.A. (1989) “Passwords in Use in a University Timesharing Environment”, Computers & Security 8 (1989), 569-579.

[Cartoon: “I’d like a word with your computer.”]

For many, if perhaps not all, organisations, an external control is necessary to ensure that internal controls are created and maintained.


Measures to reduce unauthorised access

Legal/Contractual Context
— clear legal/contractual responsibilities for all individuals with access to personal data
— clear sanctions for acts or omissions which breach those requirements
— clear statement of those responsibilities and the applicable sanctions for breach
— initial, periodic and/or per-transaction communication to the individuals concerned of their responsibilities and the sanctions for breach
— initial, periodic and/or per-transaction acknowledgement of awareness by the individuals concerned of their responsibilities and the sanctions for breach

Physical Access Restrictions
— access only permitted from nominated workstations and/or sockets
— nominated workstations only permitted within physically secure areas
— workstations disabled outside work-hours
— encryption used for transmission of data on unprotected links

Logical Access Restrictions
— access only by authorised staff, subject to control mechanisms:
  — documented need of that role to know that data about that person
  — currency of that person’s fulfilment of that role
  — allowing for transfers between roles, holidays, etc.
— workstations not usable without:
  — login id and password together
  — token (e.g. magnetic stripe or chip-card) and password
— programmed tests of validity of request against case/transaction being handled, and rejection of request or warning of notification to the data subject
— request permitted from many points, but display provided only on nominated workstations and printout provided only to nominated printers and/or addresses

Immediacy of warning as to the legality of the action and consequences
— at every login
— after request is made and before data is displayed
— requirement of signature or confirmation of id. prior to display of personal data
— warning displayed and printed with the data
— date, time, login-id and workstation-id displayed and printed with the data

Positive Acknowledgement
— notice of the access having occurred:
  — to the person who owns the login-id which was used
  — to the data subject
  — to the system manager
  — to the installation manager on a routine basis, or only when an exception has been encountered

Audit Trail of Accesses
— accessor-id
— workstation-id
— date-time stamp
— identity of person whose data was accessed
— purpose of access (i.e. case-id and relevance to case)
— nature of data accessed
— elapsed time the display remained on-screen
— whether a printout was requested
— whether a copy-and-paste was taken

Audit Trail Analysis
— investigation of a random sample of accesses
— search for and investigation of:
  — exceptional instances (e.g. first use, first use after a long period of inaction, access out-of-area, failure to proceed when signature or id. requested)
  — exceptional patterns of use (e.g. frequency of access, or access out-of-area).
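Taken together, the audit-trail fields above map naturally onto a single record type. A minimal sketch in C follows; the struct and field names are illustrative assumptions, not drawn from any particular system:

    #include <time.h>

    /* One audit-trail entry carrying the fields listed above. */
    struct audit_entry {
        char   accessor_id[16];
        char   workstation_id[16];
        time_t when;                 /* date-time stamp                */
        char   data_subject_id[16];  /* person whose data was accessed */
        char   case_id[16];          /* purpose: case-id and relevance */
        char   data_nature[32];      /* nature of data accessed        */
        long   display_seconds;      /* elapsed time on-screen         */
        int    printed;              /* whether a printout was taken   */
        int    copied;               /* whether copy-and-paste taken   */
    };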

LETTER TO THE EDITOR

Dear Sir,

I hesitate to add even more verbiage to the ACS membership issue but I am prompted to do so by the “ACS has never done anything for me” attitude of some previous correspondents.

Would it be too trite to ask what these correspondents had done for the ACS? Or for their fellows in the industry? Or for the advancement of knowledge in the profession? One of the hallmarks of members of groups from craft-based guilds to professions is that they have been expected to put something back by way of assistance to those following them.

If all of those who scan ACS publications “searching in vain for something of interest” would, instead, put pen to paper and provide us with something of interest, I would suggest that we may all be the richer for the sharing of their knowledge and experience. It surely

could not be the case that they really have nothing to offer us.

Speaking as a member of an IFIP Working Group, I would draw readers’ attention to the fact that were it not for a body such as the ACS, Australia would have no means of being a contributor to the world scene in the profession. We all take pride in the achievements of our Olympic athletes without carping that a broken record really does not alleviate the unemployment or increase our GNP. The Working Group of which I am a member is concerned with end-user education and our contribution is valued by representatives of countries as diverse as Hong Kong, Israel, Nepal and Zimbabwe.

Too many members of our profession in Australia are extremely parochial in their outlook. Computing is an international profession. To participate in the activities, we must have a national professional body. We must present a face to the rest of the world which says that we take the profession seriously. If we wish our profession to be taken seriously and not subsumed into accounting or engineering, we must have our own professional publications in which we encourage contributions by those who feel they are extending the body of our knowledge or the wealth of our experience.

It does nothing for our country or our profession for people to sit on their nethers and to complain that they don’t want to be in the ACS because it is not providing them with precisely what they want at the moment. We would all be better off if more of us sprang off our tails and made a tangible contribution to our profession other than by writing whinging letters to the editor.

Peter Juliff Victoria


WORKSTATION MANAGEMENT

Automated software distribution push

IN THE past, most companies resolved workstation management problems by simply hiring as many people as necessary to support the workstations. This meant that

an organisation typically needed a specific number of support people for a specific num­ber of workstations.

To make a substantial impact on this equation, IT management must automate a number of workstation management tasks and effectively leverage their support resources to allow them to manage a larger number of workstations per support person.

Many of the tasks are so closely interlocked that for an organisation to automate one task and fully gain the benefits of reduced workstation management costs, it must also concurrently automate others.

Software distribution is one of the tasks that has received significant focus over the past 18 months. To gain the full benefit of automated software distribution, an organisation must be able to identify where workstations are located and what is on the workstations, in order to establish distribution lists for an application or system software to be sent to the correct workstations.

It is also necessary to be able to lock a workstation configuration in place to ensure the success of a subsequent software distribution update.
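As a sketch of the bookkeeping this implies, the following C fragment (the type, field and function names are assumptions for illustration, not taken from any product mentioned here) records what is installed where, tests whether a workstation belongs on a distribution list, and locks a configuration ahead of an update:

    #include <string.h>

    /* An illustrative inventory record: where a workstation is, what
       is installed on it, and whether its configuration is locked. */
    struct workstation {
        char id[16];
        char location[32];
        char installed[8][24];   /* names of installed packages      */
        int  n_installed;
        int  config_locked;      /* 1 = frozen pending an update     */
    };

    /* A workstation belongs on the distribution list for a package
       if it does not already have that package installed. */
    int needs_package(const struct workstation *w, const char *pkg)
    {
        int i;
        for (i = 0; i < w->n_installed; i++)
            if (strcmp(w->installed[i], pkg) == 0)
                return 0;
        return 1;
    }

    /* Freeze the configuration so the subsequent update applies to a
       known state. */
    void lock_for_update(struct workstation *w)
    {
        w->config_locked = 1;
    }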

To effectively implement software distribution, two other dominoes must fall — asset management and workstation security.

Market recognition of software distribution requirements

During the past 18 months we have seen a number of vendors respond to customer demand for software distribution. Legent has acquired Spectrum Concepts; Novell has acquired Annatek Systems; Lotus and IBM have announced new products.

Microsoft has also indicated its intention to provide a software distribution component as part of Windows NT.

In an effort to quickly gain market share, IBM has aggressively priced OS/2, but has also recognised that this is not the only significant barrier that must be overcome for OS/2 to gain market share.

This is illustrated by IBM’s announcement earlier this year of its plans to beef up NetView/DM to flow out OS/2 via software distribution so as to reduce the cost of implementation yet increase the speed of installation.

It would be interesting to find out how many of the 10 million Windows 3.0 licences and one million OS/2 licences have actually been installed, since getting this software to the installed base of PCs is a real headache without effective automated software distribution.

Although no official statement has been made regarding Novell’s future product development plans for Network Navigator, the software distribution product acquired via the recent acquisition of Annatek, it would make sense to exploit Network Navigator as a distribution mechanism for Netware and DR DOS.

Focus on the total cost of ownership

One of the key factors creating so much interest in software distribution is its ability to impact the total cost of software ownership.

A common problem faced by IT management today is how to maintain or improve current service levels with a reduced budget, yet with an ever increasing installed base of personal computers. One area receiving increased focus from management is workstation management.

[Chart: Total Cost of Software, $0 to $1,200 over years 1 to 5, stacking the cost of software, installation, and ongoing management.]

Organisations are focusing on reducing costs by negotiating better prices on software or capping the total investment in software by purchasing site licences.

However, this only addresses part of the problem, as illustrated by the diagram. The purchase price of software is usually only a minor part of the cost over a three to five year period.

This can be graphically illustrated by looking at the total cost of purchasing, installing and maintaining either OS/2 or the combined Windows/DOS operating systems.

The OS/2 operating system in Australia costs around $100 to purchase and comes on 20 x 3.5" diskettes. It takes an hour to install and configure.

Given that the average IT professional’s charge rate is $50-$100 an hour, the total cost of software investment has just doubled.

If we consider that over the next three years it may be necessary to upgrade or re-install this software, it is quite conceivable that the

Dr Richard Presser, managing director of Melbourne-based Distributed Data Processing, discusses some of the driving forces behind the push for automated software distribution as a way to reduce costs and increase efficiency in workstation management.


ownership cost could be two to three times the original purchase price.

[Photo: Arthur Erlich of Novell with Richard Presser.]
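The arithmetic behind that estimate can be made explicit. A small C sketch follows, using the article's own figures; the mid-range charge rate and the count of installs and re-installs over the period are assumptions:

    #include <stdio.h>

    int main(void)
    {
        double purchase = 100.0;   /* OS/2 purchase price (A$)          */
        double rate     = 75.0;    /* assumed mid-range rate, $/hour    */
        double hours    = 1.0;     /* install-and-configure time        */
        int    installs = 3;       /* assumed: initial install plus two
                                      re-installs/upgrades over 3 years */

        double total = purchase + installs * hours * rate;
        printf("Three-year cost: $%.0f (%.1f times the purchase price)\n",
               total, total / purchase);
        return 0;
    }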

Packaging Software For Distribution

The ability to exploit a software distribution system is going to add a new dimension for IT management when evaluating future software purchases.

Today, much of the software in common use does not clearly separate data from program code. This can mean that to distribute configuration updates, it may be necessary to distribute the programs as well.

Given that many applications such as Word and Excel now occupy as much as 10M-bytes, it is undesirable to transmit the complete application just to change a configuration option.

This problem becomes most apparent when software has to be updated across WANs, where bandwidth and associated costs remain at a premium.

Client/Server

The current trend towards downsizing to client/server applications across large numbers of workstations will lead to software distribution becoming a mandatory requirement.

This is due to the need to update both the client and the server applications across many workstations in many locations concurrently.

Automated software distribution is the most cost effective way to avoid having the wrong version of a client program talking to the wrong server program.
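A minimal sketch of how such a mismatch can be refused at connection time is given below in C; the handshake and version numbers are illustrative assumptions, not a description of any product named above:

    #include <stdio.h>

    /* On connection, the server refuses clients whose application
       version does not match its own, so an out-of-date client can
       never talk to an updated server. */
    int accept_client(int server_version, int client_version)
    {
        if (client_version != server_version) {
            fprintf(stderr, "refusing client v%d (server is v%d)\n",
                    client_version, server_version);
            return 0;
        }
        return 1;
    }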

Facilities Management/Outsourcing

In recent years, we have seen organisations begin to outsource many of the functions of their IT departments including development. To be competitive, suppliers bidding for these contracts endeavour to leverage their people and reduce costs by using a range of development productivity tools.

These techniques are now beginning to be proposed by vendors bidding for the facilities management of an organisation’s workstations.

Suppliers are able to significantly reduce the number of support people required, and hence their costs, by utilising automated software distribution.

It should be noted that this generally only holds true if taken over a two to five year period, due to the up-front costs. However, significant benefits are achieved in subsequent years.

Nevertheless, as mentioned previously, in some circumstances such as the large scale rollout of client/server applications, there may be no other way of supporting them than automated software distribution techniques.

Summary

The driving forces behind software distribution are many and varied. It is necessary for organisations to understand the full impact of any software distribution technology they are planning to implement.

It is only now that some of the really hard issues in workstation management, of which software distribution is one, are surfacing. These issues, if left unresolved, will slow down or even halt the ability of many organisations to roll out workstations in volume and exploit the price-performance advantages of PC tech­nology for line-of-business applications.

Call for Papers

Australian Computer Society (Queensland Branch) Conference on “Overcoming isolation: The Human-Computer connection”. July 23-25, 1993, Townsville, North Queensland.

The focus of the conference is divided into the following areas:
1. Communication Technology
2. Education and Training
3. Computer Technology

The program committee of the Queensland Branch Conference invites interested authors to submit papers.

Schedule of Submission of Papers
28 February 1993: Final abstract submission date.
30 March 1993: Authors informed of acceptance or rejection.

Correspondence to: ACS Queensland Branch, Program Committee, PO Box 135, Aspley, 4034, Australia. Tel: 61 7 263 7864 Fax: 61 7 263 7020.

Seventh Australian Software Engineering Conference (ASWEC’93)

ASWEC is a national forum for sharing ideas, experience and development in the field of software engineering, jointly sponsored by The Institution of Radio and Electronics Engineers Australia, the Australian Computer Society, and the Institution of Engineers Australia.

Original papers for oral presentation are invited in all areas of Software Engineering practice and management. Papers should be on research or experience in software development, process, method or tool development, education and management.

The programme will include abstract presentations, for which short (15 minute) contributions are sought.

A 250-word summary should be submitted to: The Conference Secretariat, C/- IREE Australia, PO Box 79, Edgecliff NSW 2027, Australia, or Facsimile +61 2 362 3229.

Due date for receipt of full paper/abstract: April 30, 1993.
Notification of acceptance: July 9, 1993.
Due date for camera-ready copy for inclusion in Proceedings: August 6, 1993.


OPEN SYSTEMS

X/Open® and Interoperability

X/Open, founded in 1984, is a worldwide, independent open systems organisation dedicated to developing an open, multi-vendor Common Applications Environment (CAE) based on de facto and international standards. Specification of the Common Applications Environment is achieved through close cooperation between users, vendors and standards organisations worldwide. X/Open specifications are endorsed by major organisations worldwide. In this article taken from the organisation’s ‘X/Open in Action’ series, interoperability and the specifications available are discussed.

By Petr Janecek

AT PRESENT, interoperability is at the forefront of demand in computing. Now is therefore a very appropriate time to offer a guide to "X/Open and Interoperability". At the same time, interoperability is an immensely complex subject. This article is not a treatise on details of communications and other aspects of interoperability. It is aimed at corporate decision makers who wish to get a brief orientation about:
• what X/Open understands by interoperability;
• what it has done and does to make interoperability happen;
• how developers can design an open systems strategy by endorsing XPG specifications;
• how procurers can devise a procurement strategy based on X/Open branded products.

What is an "Open System"?
An open system is one which conforms to international computing standards — and is available from more than one independent supplier. In other words, it should be both "standard" and "multiple sourced".

In this context, a system might be either a complete computer system, or a discrete hardware or software component of such a system.

To measure whether a system is open, it is important that both the "standard" and the "multiple source" criteria are used. Genuine openness and the vital goal of interoperability cannot be assured in technical and commercial terms unless both criteria are met.

In practical terms, an open system is exchangeable for any other open system with a compatible set of standardised features.

The Benefits of Open Systems
An open systems strategy guarantees low risk and secure investment for the future through:
• application portability — an application running on one open system will run on another (regardless of the supplier) with little or no change;
• interoperability — an open system will work with any other open system which has a compatible set of standardised features, regardless of who supplies it;
• secure investment in data — users' data is one of their most valuable items; if it is held in an open system, it becomes portable to another open system and it also remains valid and usable through successions of future system upgrades;
• secure investment in people skills — staff skills and training remain valid, in a stable environment and predictable growth path;
• controlled development — users retain full control over the pace and costs of their information technology operation: with open systems, planned and integrated development is a reality;
• stability — because it is based on implemented and stable international standards, an open system is inherently:
★ of sound design,
★ long-lived, not prone to unforeseen updates and "fixes",
★ compatible with future releases;
• vendor independence — open systems do not tie the user to a single supplier;
• competitive pricing — open systems compete in a multi-supplier market.

The alternative to an open systems strategy is to tie oneself into a proprietary system. In every respect, however, the user's options for portability, interoperability with other systems, data portability, future changes/upgrades, stability, and competitive pricing would then be severely restricted. As a result, choice of options, availability of products, and cost control over future change are all heavily compromised, if not totally lost. It is therefore clear that any advantage a proprietary system might offer must be very large indeed to outweigh the benefits of open systems.




X/Open's Role
The corporate mission of X/Open is "To bring greater value to users from computing through the practical implementation of open systems". In other words, X/Open strives to promote practical open systems.

X/Open attempts to provide answers to the needs of users of information technology. It therefore closely monitors user needs, hardware and software technology development, and system and software vendors' requirements.

It does this through its Xtra program (a very rigorous global survey, analysis and conference of working groups), and publishes the results in a document called the Open Systems Directive (OSD).

X/Open then responds to the needs identified in the OSD in three distinct ways:
• It develops technical specifications which meet the requirements, achieving industry consensus and compliance with international standards. These specifications are then published and referenced in the XPG (X/Open Portability Guide). XPG describes the X/Open computing environment called the Common Application Environment (CAE). Software developers then use XPG to develop products which take advantage of these standardised specifications. (The title Portability Guide is, however, only kept for historical reasons and has become a misnomer: X/Open is no longer concerned solely with portability, and XPG is a series of normative specifications, not just advisory guides.) This does not restrict the freedom of the supplier to implement his software in the optimal way, nor does it limit his ability to add extra value. Suppliers can differentiate their products from other offerings by, for example, including better performance and/or extended functionality.
• It issues Guides covering important topics which are useful in developing, evaluating, procuring or managing open systems.
• It offers a branding program. A supplier's software which demonstrates compliance with X/Open's specifications in relevant areas is awarded the X/Open brand. Both whole systems and individual software components can be branded. This X/Open brand is the user's guarantee that a product is genuinely "open". Products are branded with respect to a particular version of the X/Open Portability Guide. XPG3 was published in 1989 and XPG4 was released this year.

X/Open itself is not a formal standards body, although individuals active in X/Open working groups also contribute to many committees involved in developing formal standards, at both national and international levels.

X/Open has adopted a standards policy which commits it to aligning its specifications with formal standards and, where appropriate, actively promoting its specifications as a base for international standards. This policy has been successfully applied in X/Open's relationship with the IEEE POSIX activities, where three of X/Open's OSI APIs have been adopted as IEEE standards.

Through its XPG and branding program, X/Open offers the most widely accepted specification platform in the industry on which open interoperable systems can be built.

Elements of Interoperability

Portability, Connectivity and Interoperability
From the point of view of exchangeability, open systems can support two features of application software: portability and interoperability.

Portability of applications means that applications written to a standard set of interfaces can easily be ported to all systems on which this set is implemented. Applications thus become largely independent of the computer system they run on.

Portability is essentially a feature of a standalone system. In other words, a user benefits from application portability even if he only has one single open system, since his applications become independent of the specific make of computer he currently uses.

Source code portability is achieved through the standardisation of Application Programming Interfaces (APIs), which are source-level interfaces, usually procedure calls, used by application developers.
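As an illustration (a minimal sketch of ours, not text from the XPG), the routine below uses only standardised interfaces, so the same source should compile and behave identically on any conforming open system, whoever supplies it:

    #include <fcntl.h>   /* open() */
    #include <unistd.h>  /* read(), write(), close() */

    /* Copy a file using only standardised interfaces; no
       vendor-specific calls, so the source is portable as-is. */
    int copy_file(const char *src, const char *dst)
    {
        char buf[8192];
        ssize_t n;
        int in = open(src, O_RDONLY);
        int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (in == -1 || out == -1)
            return -1;
        while ((n = read(in, buf, sizeof buf)) > 0)
            write(out, buf, (size_t) n);
        close(in);
        close(out);
        return 0;
    }

A program built only from such calls carries no dependence on the make of computer it was developed on.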

The other feature of open systems, interoperability of distributed systems and applications, is an aspect of computer systems in communication. Interoperability is the ability of application processes to cooperate through exchange of information and services.

A minimum requirement for interoperability between two computer systems is that they speak the same standardised communications language, i.e. that they encode and decode data using the same protocols. However, the use of standardised protocols will only result in connectivity, or transfer of information, being achieved. Interoperability is a higher quality which often requires the use of additional conventions such as standardised data formats, programming interfaces, file name mappings, etc.

The relationship between these features is shown at Figure 1.

The Distribution
Interoperability is a desirable feature of products operating in networks of distributed entities. These entities are:
• human users;
• data;
• computing power;
• peripherals.

In order to utilise the distributed entities optimally, application software can itself be split into parts on different nodes in the network. It then becomes a distributed application.

Users of the network need to access its resources either:
• transparently (without having to identify where in the network the resource is located), or
• remotely (explicitly specifying the details of the remote resource being accessed).

14 PROFESSIONAL COMPUTING, NOVEMBER 1992

[Figure 1: Two open systems connected by a network. Applications achieve portability through the Application Programming Interfaces of each operating system, connectivity through the network, and interoperability between the applications themselves.]

User requirements with regard to functionality have led to the identification of the main services which must be provided to the user on various nodes in the network.

The OSI Solution

Interoperability of a number of disparate systems could of course be achieved by a set of one-to-one interoperability solutions, each tailored to one specific pair of systems. This is a costly method when many kinds of systems are involved: with n kinds of system, up to n(n-1)/2 pairwise solutions may be needed. Standardisation is the obvious alternative.

For the past 20 years, the International Organisation for Standardisation (ISO) has been working on a set of standards jointly called Open Systems Interconnection (OSI). The OSI Reference Model provides the conceptual framework; its basic features are described in the standard ISO 7498 and its addenda.

The model divides communications into seven functional layers, where each layer relies on services provided by the layers underneath it. In the highest, i.e. the seventh layer, a number of standardised application services are defined, for example:
• Message Handling
• File Transfer, Access and Management
• Virtual Terminal Support and others.

Together with the definition of services, ISO also defines protocols, which are ways of encoding and encapsulating the information being exchanged when a service is used.

OSI was, and still is, being designed to enable interoperability between OSI systems. Interoperability with the plethora of existing non-OSI services using various protocols is not its aim and cannot be achieved by it. In this model, in order to achieve interoperability, all systems must run OSI.

X/Open and OSI

X/Open has a stated policy of complying with international standards. The Xtra process has also repeatedly confirmed that users require X/Open to do so. Therefore, X/Open's long term strategy in the area of interoperability is based on OSI.

So far, the OSI standards describe services and protocols. They leave programming interfaces non-standardised, as an implementation issue. Since the use of standardised APIs promotes the use of OSI-conformant systems, standardised APIs are important not only for application portability but also for interoperability. ISO has left it to the industry to fill this gap in the standard, and X/Open is leading this effort.

Non-OSI Interoperability

For obvious reasons, services and protocols for exchange of information between non-OSI systems will not be specified as standards by ISO. Yet the market would benefit if specifications of at least some of the important non-OSI protocols were published in a vendor-independent manner, since that would facilitate the emergence of multiple independent implementations and thus increase competition. X/Open provides several such definitions.

Coexistence and Migration

As OSI solutions are gradually being introduced, users are faced with the problem of preserving their investment in the non-OSI technology which is required to coexist with the OSI solutions. Also, organisations that have decided to migrate their networks and users to OSI face specific problems in making such a transition. Planners, managers and users of networks who face issues of coexistence with OSI and migration to it need advice from the industry on strategies, methods and tools for coexistence and migration of their current networks to OSI. X/Open provides such advice (see Chapter 5).


Proving Interoperability

Profiles
Customers' experience makes them automatically suspicious about vendors' claims that cannot be tested, either directly by the customer or by an independent third party. Testing schemes and branding of products are therefore important aids to competitiveness in the market.

Since it is limited to a single system, testing of portability is a relatively straightforward matter. Testing of connectivity and interoperability, however, requires a whole host of issues to be addressed.

To start with, a technically good specification, to which a product is to conform, has to be available. Given the importance of OSI, the most serious attempts at conformance testing have been done for OSI protocols. The OSI model provides for seven layers of communications service, each implemented by its own layer and protocol. At each layer, there are service alternatives and protocol options. There are thus many choices to be made in specifying how to interconnect two open systems.

Implementors of OSI products soon realised that, in order to achieve connectivity between products from different vendors, some specific options had to be arranged by the entire industry. They therefore founded regional workshops for implementors:
• OSI Implementors' Workshop (OIW) in the USA,
• European Workshop for Open Systems (EWOS) in Europe, and
• Asian-Oceanic Workshop (AOW) in the Far East.



These workshops work towards agreements on functional standards, better known as profiles, which they then submit to ISO. When harmonised for all three regions, these profiles achieve the status of International Standardised Profiles (ISPs).

Given an application and a type of network, a profile specifies a particular set of choices at all levels of the model which:
• completely defines the communications,
• supports the application, and
• works over the type of network.

Thus the user needs only to select the profile corresponding to his application and networking requirements in order to guarantee connectivity between all his equipment, as well as with all his communication partners, provided they have selected the same profile.

Conformance Testing

A number of organisations have been founded by the industry to test conformance of a product to an agreed profile, such as:
• Corporation for Open Systems (COS) in the USA,
• OSInet in the USA, and
• EurOSInet in Europe.

Conformance of a product to a profile is normally tested by examining how the product behaves when interacting with a "model" system which has been implemented by the testing organisation.

Testing of Interoperability
Although theoretically possible, in practice it is very difficult to prove that two products will interoperate under all circumstances. Only access to the source code for both, as well as systematic testing covering all the paths leading through each one of the two, would provide a satisfactory answer. Such an approach is, of course, prohibitively expensive and normally impossible for source copyright reasons, among others.

Because of the legal implications, and the difficulty of determining whose fault it is when two products do not interoperate, vendors are reluctant to give a guarantee of interoperability. However, schemes are being designed for at least registering claims of interoperability between specific products.

The European organisation Standards Promotion and Application Group (SPAG) recently introduced the concept of product profiles and the Process to Support Interoperability (PSI), which encompasses a methodology for the evaluation of interoperability as well as rules for arbitration in cases when two products certified by SPAG as interoperable fail to interoperate.

Instead of testing, vendors sometimes use interoperability demonstrations to convince both themselves and their customers that their specific product will interwork with other vendors' products. For such a demonstration, local networks consisting of dozens and perhaps even hundreds of systems are set up and specific products are run on them (one example is the Connectathon used for demonstrating interoperability of implementations of the Network File System). While giving the most extensive indication of a product's ability to interoperate, this many-to-many exercise has to be repeated every time a new product joins the LAN or a new version of a product appears.

X/Open and OSI APIs

X/Open has produced industry consensus specifications of APIs to several of the OSI Application Services. While the primary benefit from standardisation of APIs is application portability, several of X/Open's APIs also facilitate interoperability through definitions of associated packages of OSI object classes accessed and manipulated by these APIs. The use of standardised APIs also generally promotes the use of OSI-compliant systems and thus interoperability.

The relationship between X/Open's OSI APIs and the OSI protocol stack is shown at Figure 2.

Three of X/Open's OSI APIs were recently adopted as base documents for IEEE standards, and there is potential for the adoption of more.

The X/Open Common Application Environment currently incorporates the following OSI-related documents:

The X/Open API to Directory Services (XDS)

A directory service provides information about objects in the network. The directory maintains an information base of attributes (e.g. name, address, creation date, etc) associated with the objects. Objects can be referenced with a naming service or by a set of defining attributes. The directory service will return requested attributes pertaining to the referenced object(s).

The XDS is the interface between a Directory User Agent and an application program (which is the user), by which the application program can access the Directory Service.

The XDS is designed to offer services that are consistent with, but not limited to, the 1988 CCITT X.500 Series of Recommendations and the ISO 9594 Standard.
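The flavour of the interface can be sketched as follows. This is an illustrative outline of ours, not text from the XDS specification: the construction of the name and selection arguments (which are XOM objects) is elided, and error handling is omitted:

    #include <xom.h>   /* OSI-Abstract-Data Manipulation (XOM) */
    #include <xds.h>   /* X/Open Directory Services (XDS) */

    /* Outline of a directory read: initialise a workspace, bind a
       session to the directory, read one entry, then clean up. */
    void read_entry(OM_object name, OM_object selection)
    {
        OM_workspace ws = ds_initialize();
        OM_private_object result;
        OM_sint invoke_id;

        ds_bind(DS_DEFAULT_SESSION, ws);
        ds_read(DS_DEFAULT_SESSION, DS_DEFAULT_CONTEXT,
                name, selection, &result, &invoke_id);
        /* ... examine 'result' with om_get(), then release it ... */
        ds_unbind(DS_DEFAULT_SESSION);
        ds_shutdown(ws);
    }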

The XDS has been developed in collaboration with the X.400 API Association.

The X/Open API to Electronic Mail (X.400)

The X.400 APIs specified in this document provide access to, and interconnection of, messaging systems whose architecture is in accordance with the CCITT X.400 Series of Recommendations and the ISO Message Handling System (MHS) Standard. The X.400 Specification defines two interfaces to the functionality of an MHS based on international messaging standards: the Message Access Interface (or Application API) and the Message Transfer Interface (or Gateway API). While they differ from each other in the type of messaging functionality they provide, both interfaces present a model whereby messages, reports and probes are passed across the interface between the user of the interface, or client, and the provider of the interface functionality, or service.


[Figure 2: X/Open's OSI APIs and the OSI protocol stack: the application APIs (XFTAM, XMHS, XMP, XDS), together with XAP and XTI, sit above the Application Services (FTAM, MHS, CMIP, X.500), the Upper Layer Services (ACSE/Presentation/Session) and the Transport Services (Class 4, CLTP/CLNP/8802+X.25 and Classes 4, 2, 0/X.25, covering layers 1-4).]

The X.400 API has been developed in collaboration with the X.400 API Association.

The X/Open API to OSI-Abstract-Data Manipulation (XOM)
Messages, probes and reports are represented at the X.400 and XDS interfaces by data structures called OSI objects. The XOM gives definitions of these OSI objects and of the functions available for creating, examining, modifying or destroying them. The XOM is used, for example, by the XDS, X.400, XMS and XMP APIs.
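By way of illustration (again an outline of ours, not specification text; the class identifier is a stand-in for one of the constants defined by the packages that use XOM), the lifecycle of a private object looks like this:

    #include <xom.h>

    /* Create a private object of a given class in a workspace,
       then destroy it.  Attribute handling is elided. */
    void object_lifecycle(OM_workspace ws, OM_object_identifier class_id)
    {
        OM_private_object obj;

        if (om_create(class_id, OM_TRUE, ws, &obj) == OM_SUCCESS) {
            /* ... populate with om_put(), inspect with om_get() ... */
            om_delete(obj);
        }
    }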

The XOM has been developed in collaboration with the X.400 API Association.

The X/Open EDI Messaging Package
OSI objects defined by the XOM are categorised on the basis of their purpose and structure into categories known as classes. Related classes can be grouped into collections known as packages. This EDI Messaging Package defines such a set of Electronic Data Interchange object classes. Using the EDI package, objects representing EDI information can be passed across the interface between client and service.

The EDI Messaging Package has been developed in collaboration with the X.400 API Association.

The X/Open Message Store API (XMS)
This interface has been designed for operational interactions with a Message Store. It uses facilities provided by the XOM.

The Message Store Abstract Service is a part of the MHS and is defined in the CCITT X.413 Recommendation. The Message Store acts as an intermediary between the Message Transfer System and the User Agent; its main function is to accept delivery of messages on behalf of a single end-user and to store the messages for subsequent retrieval by the end-user's User Agent.

The XMS has been developed in collaboration with the X.400 API Association.

The X/Open Guide to Selected X.400 and Directory Services APIs

Using the XOM, XDS and X.400 API definitions is not an easy task for an application programmer with little previous experience of OSI abstract data. This guide has been produced to provide an introduction and tutorial which complements the three API specifications mentioned. The tutorial is illustrated through the use of selected 'C' language programming examples.

This Guide has been developed in collaboration with the X.400 API Association.

The X/Open Management Protocol API (XMP)

OSI provides the Common Management Information Service (CMIS) as a means for performing management operations. Such operations are specified in terms of managed objects, which represent real resources that need to be managed. Objects are described using Abstract Syntax Notation (ASN.1).

The XMP interface provides access to the CMIS service and thus provides a mechanism for the performance of management operations within a distributed environment. XMP uses the XOM interface. XMP also provides access to the Internet Simple Network Management Protocol (SNMP).

The X/Open Management Protocol Profile (XMPP)

To enable interoperability, it is necessary to provide profiles of communications protocols. Profiles specify the use of optional features and ensure that implementations use compatible sets of such options. International Standardised Profiles (ISPs) exist for OSI management.

The XMPP specification refers to these ISPs and thus requires that conformant implementations conform to these profiles. It also refers to the relevant Internet specifications for the SNMP environment.

X/Open and Interoperability with Legacy Systems

From the commercial point of view, the following are the most important legacy system types to consider:
• Internet Protocol Suite (IPS) systems;
• personal computers;
• IBM-compatible mainframes.

X/Open has been addressing problems of interoperability in all three of these areas.

The Internet Protocol Suite

The X/Open Guide to IPS (XGIPS)
The Internet Protocol Suite (IPS), popularly known as "TCP/IP", is the current de facto standard for interworking between non-OSI multivendor systems. It is not a formal standard, and its documentation in the Internet Requests For Comments (RFCs) does not have the rigour of a formal standard. It also allows a number of options for implementation. Since implementors of IPS have chosen different sets of options in their specific products, it is not easy to give general advice to users who wish to migrate from IPS to OSI. X/Open has therefore produced the Guide to IPS, which describes the IPS functionality that can be found in the majority of current commercial IPS implementations. This description also defines the IPS platform from which most users will be migrating to OSI, or at least coexisting with OSI.

The X/Open Guide to IPS-OSI Coexistence and Migration (CoMiX)




This Guide was produced by X/Open in order to help managers, network planners and implementors, and users to understand the issues involved when migrating from IPS to OSI.

CoMiX first gives a number of real-life scenarios, derived from market research, through which the various user objectives and problems are illustrated. Then, descriptions are given of the functionality available in IPS on one side and in OSI on the other, so that a comparison can be made. (For the description of IPS functionality, CoMiX refers to XGIPS, the guide described above.) Users are further advised on how to design their strategies. Chapters on techniques and tools describe the technical means of use for coexistence and migration. Finally, application of the techniques and tools to the scenarios provides a sanity check of the value of the advice provided.

The Byte Stream File Transfer Definition (BSFT)

One of the most used services provided by IPS is the File Transfer Protocol (ftp). In order to ease the migration of users from an IPS-based network to an OSI-based one, X/Open has developed the BSFT definition, which preserves most of the interface and functionality of ftp but uses the OSI FTAM protocol profiles underneath. Thus BSFT simplifies the simultaneous use of file transfer facilities in the IPS and OSI environments and facilitates coexistence and migration between these two protocol sets. BSFT is fully conformant with the ISO/IEC Standard 8571, Information Processing System — Open Systems Interconnection — File Transfer, Access and Management. It uses the International Standardised Profile ISP 10607:1990, FTAM, part 3 (AFT11 - Simple File Transfer Service) and part 6 (AFT3 - File Management Service).

The X/Open Network File System (XNFS)

The most widely used architecture for heterogeneous transparent file access between systems running IPS is the Network File System, originally developed by Sun Microsystems, Inc. X/Open has recognised the position of NFS as a de facto standard and published this specification as a temporary but complete solution to the problem of transparent file access between X/Open-compliant systems.

The XNFS specification defines:
• the transparent file access service provided by XNFS,
• the protocols that support this service between X/Open-compliant machines, which can take the role of either servers or clients, and
• the differences in semantics of the XPG3 Volume 2, System Interfaces and Headers, and Volume 1, Commands and Utilities, when they are used "transparently" over the network using XNFS rather than locally.
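Transparency is the key point: once a remote file system has been mounted (the command syntax and options vary between implementations, and the host and path names below are invented for this sketch), applications open files under the mount point with the ordinary standardised interfaces, unaware that the data is remote:

    # illustrative only; mount syntax and options are system-specific
    mount fileserver:/export/projects /projects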

The X/Open Management Protocol API (XMP)
The Internet Protocol Suite provides the Simple Network Management Protocol (SNMP) as a means for performing management operations. XMP provides access to the Internet Simple Network Management Protocol (SNMP) as well as to the OSI Common Management Information Service (CMIS).

The X/Open Management Protocol Profile (XMPP)

In order to provide interoperability, it is necessary to provide profiles of communications protocols. Profiles specify the use of optional features and ensure that implementations use compatible sets of such options.

The XMPP specification refers to the relevant Internet specifications for the SNMP environment and thus requires that conformant applications conform to these specifications.

Personal Computer Interworking

Most of the personal computers today run the DOS or OS/2 operating systems, which do not conform to X/Open specifications. In local area networks, such systems can be connected as clients to an X/Open-compliant server. X/Open has therefore developed specifications of protocols and interfaces which can be used by such servers.

For Local Area Networks with personal computers, two distinct market segments have been identified by X/Open: one where personal computers are being integrated into an existing network of X/Open-compliant systems which are already running XNFS, the other where X/Open-compliant servers are being added to an existing Local Area Network consisting primarily of personal computers. Two different solutions have been specified for these two markets.

Protocols for X/Open PC Interworking: (PC)NFS

Where personal computers are being integrated into an existing network of X/Open-compliant systems which are already running XNFS, the (PC)NFS protocol is used. Its specification is a specific version of the XNFS one; it defines the protocol for communication between a PC client running DOS or OS/2 and an X/Open-compliant server.

This version of XNFS specifies only those aspects required to implement a server for a single-user system client, and specific attention is given to issues unique to PC interworking, including authentication, remote print spooling extensions, and DOS file sharing and locking support.

Protocols for X/Open PC Interworking: SMB

The Server Message Block protocol, originally developed by Microsoft Corporation, is intended for use where X/Open-compliant servers are being added to an already existing Local Area Network consisting primarily of personal computers.

IPC Mechanisms for SMB

Since systems in networks running SMB do not use interfaces defined in XPG, this specification defines interfaces to Inter-Process Communication, covering named pipes, mailslots and messaging. It further defines the necessary Server Message Block protocol extensions for interprocess communication.

Asynchronous Serial Links

For interoperability via asynchronous serial links, X/Open has defined in XPG3 Volume 7 a file transfer protocol identical to the Kermit protocol, as well as a set of features provided on X/Open-compliant systems for terminal emulators.

CPI-C

For communication between X/Open-compliant systems and IBM-compatible mainframes, the protocol most in use is the proprietary SNA Logical Unit 6.2. However, in order to ease integration between open systems and such mainframes, X/Open has published the specification of the Common Program Interface-Communications (CPI-C), an API based on IBM's definition.

X/Open Transport Interface (XTI)

One of the solutions to the problem of coexistence and migration of networks using different transport providers is to write the applications running in such networks against an API that is independent of any specific transport provider. The XTI has been developed for this purpose. While it is concerned primarily with the ISO Transport Service Definition (connection-oriented or connectionless), it may be adapted for use over other types of provider, and has indeed been extended to include TCP, UDP and NetBIOS.
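A sketch of the interface follows. It is illustrative only: the device name "/dev/tcp" is an assumption, since the transport provider named when the endpoint is opened is exactly what XTI leaves implementation-specific:

    #include <xti.h>
    #include <fcntl.h>

    /* Open a transport endpoint, connect to a peer whose address has
       already been filled into 'call', send a few bytes, and close.
       The same code runs over any provider reachable through XTI. */
    int send_hello(struct t_call *call)
    {
        char msg[] = "hello";
        int fd = t_open("/dev/tcp", O_RDWR, (struct t_info *) 0);

        if (fd == -1)
            return -1;
        if (t_bind(fd, (struct t_bind *) 0, (struct t_bind *) 0) == -1 ||
            t_connect(fd, call, (struct t_call *) 0) == -1) {
            t_close(fd);
            return -1;
        }
        t_snd(fd, msg, sizeof msg, 0);
        t_close(fd);
        return 0;
    }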



X/Open and Distributed Computing

X/Open has recently developed the X/Open Distributed Computing Services Framework (XDCSF). The XDCSF is a comprehensive blueprint for a complete system environment that will allow open systems to better address the critical needs of enterprise-wide heterogeneous distributed computing.

The framework specifies:
• the required services;
• the relationship between services;
• the programming interfaces to services; and
• the protocols and data formats that provide interoperability between systems that support the services.

The Framework is organised as a set of four

layers, as shown in Figure 3. Each layer provides a level of functionality in an enterprise computing network:
• The Operating System Services provide an environment for running distributed software on each node of the network.
• The Communication Services provide the services that allow applications to communicate reliably across the network, independently of the underlying network topology, networking protocols or data representations.
• The Distribution Services provide a set of services that support consistency in distributed applications. These services address the fundamental issues in computing, including the processing model, the naming model, the security model and the management model.
• The Application Services provide the enabling distributed system software to support the development of distributed applications.

[Figure 3: The X/Open Distributed Computing Services Framework: Application Services (messaging, object management, data management, transaction processing, windowing, naming and directory, time, security mechanisms, systems management), Distribution Services, Communication Services (RPC services, networking services) and OS Services (system interfaces, utilities, commands and headers), framed by security, availability, manageability and internationalisation.]

While mainly concentrating on interoperability of open systems, the Framework also includes services provided for interoperability with legacy systems.

This article, extracted from the 'X/Open in Action' series, has been published with permission. Information on X/Open and its publications can be obtained from X/Open Publications, P.O. Box 475, Ringwood, Victoria; telephone 03 879 7412; facsimile 03 879 7570.

The Author
Petr Janecek was born in Czechoslovakia and is a Swedish national. He was educated at the Czech Technical University in Prague, at the Czechoslovak Academy of Sciences and at Lund University in Sweden, where he was awarded a doctorate in mathematical physics.

After some years working in research at the Nordic Institute for Theoretical Atomic Physics and at CERN, he was employed by the ASEA Corporation, based in Vasteras, Sweden, on a number of engineering projects which brought him more closely into contact with computer processing techniques.

He worked with Ericsson Information Systems as Manager, Software Development Environment, and became the company's X/Open Technical Manager. In a broader role with Nokia Data, as the company's Director of Strategic Planning and Architecture, he chaired two X/Open working groups on data management and PC interworking, as well as acting as the Nokia Technical Manager working with X/Open.

Petr Janecek joined X/Open Company in April 1989.


CASE STUDY WINNER

TRIPS — A rightsizing project
ACS case study award winner
By John Wright

John Wright, employed by Computer Sciences of Australia in Canberra, was Technical Director of the TRIPS project for the ACT Government's Department of Urban Services. This case study was the winning entry for the ACS Case Study Award.

IN 1990, the ACT Department of Urban Services, Transport Regulation Branch, decided to replace their existing motor vehicle registration, drivers licence and parking infringement computer systems. The replacement would be an integrated system running in a UNIX "Open Systems", multivendor environment, with each component chosen to be the best available within normal budget constraints.

This objective was to be achieved by Transport Regulations becoming the Systems Integrator, and letting contracts for the supply and delivery of computer hardware, networks, workstations, database and applications software to vendors after evaluations. As a result there were seven major vendors involved in the project:
• Host computers — Hewlett-Packard
• Wide Area Network — ACT Government Computer Service
• Network routers — Ungermann Bass (Cisco)
• Local Area Network software — Novell
• Workstations and cabling — Unisys
• Relational Database Management software — ITI
• Applications design and development — Computer Sciences of Australia

Project Beginnings
The Department engaged the services of an external consultant to help with the preparation of the tender documents. These were to set the framework for the subsequent project development.

A two-stage tender process was adopted by the Department. Expressions of interest were called for the supply of a Relational Database Management System (RDBMS) running on an unspecified UNIX platform, and for the design and development of application software written using the RDBMS selected.

From the twenty-seven responses received, Requests for Quotation were sought from eight companies. Three short-listed companies were then evaluated using a simulated development prototype. The evaluation team visited each company for a day and commenced by presenting a five-page design specification. After answering clarification questions, the development team had the remainder of the day to do the database design and build data entry and reporting applications. A significant design change was introduced late in the evaluation and had to be incorporated into the database and applications.

The completeness of the applications, and the ease with which the design change was handled were used in the overall tender evaluation process.

From this evaluation, Computer Sciences of Australia was selected to design and develop the applications. After subsequent negotiations between CSA and the Department, the Relational Database Management System DBQ, supplied by Information Technology International (ITI), was chosen as the database environment for the system.

System Configuration
The system, as implemented, consists of a Hewlett-Packard HP9000/867 with 96 Mbytes of memory, 2 x 1.3 gigabyte disc drives, Novell Local Area Networks in 3 of the 5 Transport Regulation office locations in Canberra, 100 Unisys discless workstations running Novell LAN Workplace for DOS with VT220 terminal emulation, 45 HP LaserJet IIIP printers, the ACT Government Computer Services (ACT GCS) Wide Area Network using Cisco routers and a mix of Telecom private and ISDN lines, the DBQ RDBMS, and applications developed by CSA. A second Hewlett-Packard HP9000/842 computer is used for development, testing and training purposes.

Total staff directly involved with the project were a Transport Regulation project manager (on secondment from ACT GCS), 3 Transport Regulation user team leaders, 1 CSA project manager, 7 CSA applications design and development staff and 3 ACT GCS data conversion and development staff.

System Design
The ACT Government's agreement to comply with the Prime Minister's 10 Point Road Safety Package, and its commitment to introduce the necessary legislation, dictated that the new system be operational from 2 January 1992. This was the non-negotiable deadline under which the project was developed.

An initial team of five staff from CSA commenced work with Transport Regulations in February 1991 to begin planning the system. The Transport Regulation user community had very high expectations of, and enthusiasm for, the new system. This turned out to have both a positive and a negative effect on the development team.

A project plan for the development and implementation of the applications was developed and maintained throughout the project. The milestones of this plan were committed to by all development staff, and the achievement of these intermediate goals contributed towards the overall success of the project.

The application development consisted of eight phases:
• Development of Functional Specifications
• Development of Detailed Design Specifications
• Construction of Applications
• Integration Testing of Applications
• User Acceptance Testing
• Data Conversion
• Implementation
• Production, and post-production support

There was a formal signoff by Transport Regulations at the end of phases 1, 2 and 5, and continuous user involvement during all phases except Integration Testing. Data conversion continued in parallel with phases 2 to 5. A senior CSA consultant conducted a formal Quality Assurance review during phase 3, with a further review during phase 7. This was followed by a half-day project review in March 1992. This review involved the Transport Regulation and CSA project managers, the CSA account manager and key project and user team staff.

During the functional specification phase, it became obvious that the tender documentation estimates for data entities, attributes and applications screens were gross underestimates. The final figures were 255 data entities (tables), 1700 data attributes (table columns) and 530 applications.

The functional specification document was divided into seven sections corresponding to each of the major areas in the system. These were:
• Vehicle Inspection
• Registration
• Licence
• Parking
• Finance
• Menu and Security
• General (common applications).

The document contained a description of the system architecture and a description of each function that would be included in the final system. The document served to define the boundaries of the system, so that as development continued, it would be possible to distinguish that which was agreed from that which would of necessity become an enhancement after implementation. During the functional specification stage, the Department issued system design requirements (business requirements) which were to have a great impact upon the design and construction of the system. In the post-project review, several of these requirements were identified as having imposed a high degree of risk upon the project.

Requirement 1
Unlike the existing computer systems, a customer would only need to enter one queue to be served, regardless of the type, or number, of transactions the customer wished to carry out. Customers are served by a counter officer using a workstation with an attached laserjet printer, which is used to print registration labels and certificates and drivers licences, both on preprinted stationery, or general receipts.

The transactions available to a customer are:
• obtain one or more new vehicle registrations,
• renew one or more existing vehicle registrations,
• book a driving licence test,
• obtain a drivers licence,
• change their name and/or address,
• pay part or all of their outstanding parking notices.

These transactions, while all separate, were to be considered a unit of work, and so the customer was only required to pay at the completion of their transactions, much like a normal supermarket operation. The customer is able to use a combination of cash, cheques and, subject to a business rule, credit cards for payment of their transaction.

During construction, the concept of a customer's transaction would be further modified as users' requirements pushed back the "commit" point of the transaction.

Thus the concept of a "Transaction" became defined as all those things that a customer could do and pay for at the one time. This imposed design constraints upon such things as the handling of record locking, and user- and system-initiated rollbacks.

The whole system had to be centred around the requirements of the front-line counter staff, as the overriding business requirement was to service customers as quickly as possible.

Requirement 2

All users would access the system via workstations located throughout Canberra and connected to the host computer via local and wide area networks. All documents generated by the system, depending upon their type, had to be printed on the user's local printer attached to their workstation, or on a work-group printer serving several workstations. These printers were also to be used from the workstations in an Office Automation role, using DOS-based word processing and spreadsheet packages.

After client review and signoff of the functional specification document, the detailed design was commenced. This task was undertaken by the same team who prepared the functional specification. For this document, the detailed business rules were captured, along with a proposed screen layout for each application. Each function identified in the functional specification either became an application or was combined with one or more functions to become an application. Field-level processing rules and constraints were also documented in so far as they were known.

During this phase it became obvious that some of the long-established business rules of the Department depended for their validity upon the perceptions of different policy groups. There was also conflict between the three systems being merged that required clarification.

Added to this, the Department underwent a major structural and staffing reorganisation as the three functionally separate (but operationally interrelated) units were amalgamated into one new organisation.

Application Development
DBQ was ported, by ITI, to the HP9000/842 development computer in June, and construction of applications commenced shortly afterwards. The design team was augmented with additional CSA and ACT GCS staff, and familiarisation with DBQ and relational databases continued for those staff requiring the knowledge.

A mistake made by some projects, and repeated in this one, saw the core applications in each functional area commenced first. The reason for this was the keenness of the developers to each get an application "out the door" so that their user team could see what the new system would be like.

Several of these core applications contained some of the most complex business rules, and hence processing logic, in the entire system. For developers new to relational database processing, the learning curve was very steep and had an initial impact on the project schedule.

In hindsight, the old adage "hasten slowly" should have been more rigorously applied, with construction of simpler applications commenced first. Additionally, because of the close working relationship between the developers and the user teams, the temptation to add, or agree to, a new user requirement because it was "nice" or technically challenging was at times too great. The project team had been warned of this danger by staff from Rob Thomsett and Associates early in the project, but the danger is always hard to appreciate until experienced.

The inclusion of extra features because they "would only take an hour", half a day, etc., masked the fact that the extra feature might add unnecessary complications when it came to integration testing. The addition of a simple, "it will only take a minute" feature cost significant time later in the project.

This is a lesson which needs to be learnt and re-learnt many times over by applications developers.

Whilst development was progressing, the Transport Regulation Project Manager was finalising the topology of the network. The simple requirement to be able to print documents at a user's workstation printer required a great deal of effort to implement. The printing model proposed by the applications team was that the applications would print either to a LOCAL printer or a work GROUP printer. The applications would know the class of printer required, but not the specific printer.

The translation from the LOCAL or GROUP print request to the physical printer was handled by defining Novell print server queues as remote, BSD printers to HP-UX. This combination has worked in a robust, flexible manner and has relieved the applications from knowing the location at which they are being run or their destination printer.
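In outline, each Novell queue appears to the UNIX host as a remote BSD printer. A generic printcap-style entry of the kind involved might read as follows (the queue, host and spool names are invented for this sketch, and HP-UX's actual remote-printer configuration differs in detail):

    # LOCAL printer for counter workstation 23, served by a
    # Novell print server queue
    local23|counter 23 printer:\
            :rm=nwprint1:rp=Q_LOCAL_23:\
            :sd=/usr/spool/lpd/local23:lp=: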

Application Testing
Because of the tight schedule and non-negotiable deadline, user training and application testing were done in parallel. This obviously caused some problems, and is a highly undesirable mode in which to operate. However, it did provide a large and well controlled test environment in which to fine-tune data locking issues in the applications.

DBQ provides row-level locking and support for the ANSI (X3H2-88-127) data concurrency model. The applications and database schema were designed to use the highest data concurrency level of the ANSI model.

Data Conversion
In parallel with the construction of the applications, the data extraction and conversion from the UNISYS mainframe was planned and coded. This was undertaken by an ACT GCS officer who had extensive experience with the existing systems.

The goal of the data conversion process was not only to move the data to the RDBMS, but also to match, and where possible combine, names and addresses from the different systems to form the one client on the new integrated system.

Because of the length of time required to extract, convert, match and load the data, it was decided to close the Motor Registry offices to the public over the Christmas — New Year period. This provided an eight-day window in which to perform this task. Data loading was completed on 29 December 1991 and preliminary tests were conducted on 30/31 December in readiness for production on 2 January.

It had originally been scheduled to have a "production" database with a full load of data for use during integration and acceptance testing, but due to difficulties with the data matching process this was not achieved.

Implementation
A load test of the system was performed on 30 December which indicated that the target of 70 concurrent users would not be achieved with the current system configuration, but the decision had already been taken by the steering committee to proceed, so the system went into production at 8.30am on 2 January 1992.

There was a large build-up of customers because of the extended close-down, and although all operators had been trained over the preceding two-month period, they were not totally familiar with the new system. An additional complication was that the ACT was in the middle of an election campaign, and the issue of motor vehicle registrations was an election issue. Despite these problems, the system processed approximately two-thirds of a normal day's work on its first day.

It was obvious that the system was inadequate to handle the load being placed upon it, and Hewlett-Packard expedited the upgrade of the host computer to an HP9000/867. This has largely eliminated the problem, and work is continuing to increase the performance of the entire system.

Lessons Learned
It has been said that a project gets the Steering Committee it deserves. This project was aided in meeting its goal by having a very competent Steering Committee which was able to focus on those high-level activities which had to be performed so that low-level technical tasks were unhindered.

A multi-vendor, open systems solution to a business problem is possible with the hardware and software that is currently available in the marketplace. However, there is a need for a strong "systems integrator" to ensure that all hardware and software functions correctly in the target environment. If the systems integration role is to be performed by the client organisation, then the organisation must have sufficient staff, with the required skill set, allocated full time to the project.

Applications developers must continuously resist the desire to aim for 100 per cent technical excellence and to add extra little features. The little bit extra ("Creeping Excellence") costs any project dearly, either in time or money or both.

I am not aware of any metrics or accurate rules of thumb to determine the processing loads imposed upon UNIX systems and networks by applications. UNIX kernel re-configurations to tune the system are empirical, and based upon "trial and error".

Accurate system performance monitoring tools are essential to gain a feel for how the system is behaving. The HP-UX product GlancePlus is run continuously and visually monitored by the operations staff to manage system run queue length and response time for the users.

Implementation of an easy to use, yet rigorous, application change control system is essential. UNIX provides two mechanisms for version control, SCCS and RCS. The latter was used successfully on this project.
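In day-to-day use, RCS reduces to a small set of commands (the file name below is hypothetical):

    ci -l trips_reg.c      # check in a new revision, keep a locked working copy
    co -r1.4 trips_reg.c   # retrieve an earlier revision
    rcsdiff trips_reg.c    # compare the working file with the last revision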

Data conversion is an integral component of any systems redevelopment, and the accuracy with which it is performed is critical to the success of any project.

Conclusion
TRIPS was conceived by the Department as a means of updating their existing computer systems. They put into place an effective steering committee which concentrated only on high-level issues. Very pragmatic business relations were established between the Transport Regulations and CSA project managers, and the technical team was allowed to concentrate upon the design and construction of the system. This separation of responsibility, guided continuously by detailed project plans, I believe, contributed significantly to the success of this project.

1994 Churchill Fellowships

for overseas study
The Churchill Trust invites applications from Australians of 18 years and over, from all walks of life, who wish to be considered for a Churchill Fellowship to undertake, during 1994, an overseas study project that will enhance their usefulness to the Australian community.
No prescribed qualifications are required, merit being the primary test, whether based on past achievements or demonstrated ability for future achievement.
Fellowships are awarded annually to those who have already established themselves in their calling. They are not awarded for the purpose of obtaining higher academic or formal qualifications.
Details may be obtained by sending a self-addressed stamped envelope (12 x 24 cm) to: The Winston Churchill Memorial Trust, 218 Northbourne Ave, Braddon, ACT 2601.
Completed application forms and reports from three referees must be submitted by Sunday, 28 February, 1993.


ACS IN VIEW

South Australian Branch Report
The Information State
The SA Branch conference was held at the Whalers Inn, Encounter Bay, from October 9 to 11, with 53 delegates and 13 accompanying guests.

Delegates included 5 Indonesian lecturers attending a training course at the Tea Tree Gully College of TAFE under the sponsorship of the Centre for International Education and Training.

The conference was opened by the Hon. Lynn Arnold MP (Premier of South Australia), after which Mr Graham Ingerson MP (Deputy Leader of the Opposition) also spoke. It was gratifying to discover that both sides of the SA parliament are in agreement about basic IT proposals within SA, such as the MFP; it is in the methodologies being used to implement the proposals that their philosophical differences occur.

The balance of Friday afternoon was taken up with a panel discussing the conference theme — 'South Australia — The Information State'. The panel consisted of Bruce Guerin (CEO, MFP Australia), Geoff Dober (General Manager, Information Services, CUB, and National President of the ACS), Des Scholz (General Manager, Commercial, AOTC) and Michael Ward (MIS Manager, Australian Submarine Corp.).

This presentation engendered some lively discussion which tended to reinforce the belief of the delegates that, whilst economic conditions are not rosy at the moment, the current initiatives within South Australia will ensure a healthy future for the state, providing it utilises and builds upon existing and proposed services and structures.

Other papers presented during the conference were principally about South Australian projects, including:
□ the Jindalee Operational Radar Network (JORN)
□ the management of change while establishing an IT system for the University of South Australia
□ 'Group Decision Support Systems' (using IT to support meetings)
□ the role of IT within SA government
□ an application developed to create a program for sporting competitions
□ development of pen-based systems
□ multi-vendor architecture within SA government; and
□ effects and benefits of the Justice Information System.

Other sessions included:
□ management and communications workshops
□ use of communications strategies to gain a competitive advantage
□ current legal issues for IT professionals; and
□ IT management and employment for the 90s.

ACS President Geoff Dober outlined 'The State of the Society', whilst Les Irvine, Branch Chairman, spoke about ACS involvement in South Australia. Both speakers indicated the strengths and weaknesses of the ACS while showing how the Society is fulfilling its role within the IT industry and positioning itself for the future.

Besides the excellent business sessions there were many informal discussions during the session breaks, when attendees took advantage of the opportunity to network.

Friday night offered another opportunity for delegates and guests to socialise, with pre-dinner drinks and nibbles followed by the Opening Dinner.

On Saturday night the after-dinner speaker, Dr Brian Sando, entertained with anecdotes from his experiences as Chief Medical Officer for the Australian Olympic Team at Barcelona.

By the time the conference closed at midday on Sunday, the attendees were unanimous in agreeing that Brian Kennedy, conference chairman, and his dedicated committee had produced a conference which had maintained, if not advanced, the high standard of previous Branch Conferences.

The technical sessions were informative and well presented, the social activities magnificent, and the venue delightful.

ACT Branch Report
Chairman warns members on Trade in Confidential Information
Tom Worthington, Chairman of ACS Canberra, has warned Canberra members, in the October branch newsletter, to check that their systems are adequate to protect confidential information.

He was commenting on a NSW Independent Commission Against Corruption report on a trade in confidential information.

He said there had been considerable discussion in the press on which aspects of this trade were technically illegal, and under what laws.

“While the law may be unclear, the ethics for IT professionals are not. Initiating or assisting with the unauthorised release of confidential information from an IT system would place an IT professional in breach of their professional ethics.

“Also, failing to take reasonable steps to prevent the unauthorised release of information may lead to a charge of professional misconduct against an ACS member.”

ACT TAFE will be Canberra Institute of Technology in 1993
The ACT TAFE has announced a name change to the Canberra Institute of Technology. This follows support for a name change and increased TAFE resources from the Canberra Branch of the ACS. Mr Norm Fisher, Director of ACT TAFE, has thanked the Canberra branch for its support.

Victorian Branch News
Transforming your business?
One of the constraints on business transformation is that it often involves “breaking the rules”. You may be able to see a breakthrough business process redesign, yet be unable to get any support for breaking the rule, even from those who stand to gain most from the redesigned processes.

Your colleagues need help in seeing what you are trying to achieve. One way to open their eyes is to invite them to accompany you to our half-day seminar on Business Transformation with the highly acclaimed Dr Margie Olsen. It is a half-day at most reasonable cost, and it could provide the breakthrough you need to get support for some process redesign.

Project Management Workshop
If you have never attended a Rob Thomsett workshop, then you have an opportunity to do so now. Rob ran a one-day executive briefing for us earlier this year and many attendees requested this workshop as a follow-up. If you are self-employed you will appreciate that the 3-day course requires you to sacrifice only two working days. If you are employed, your boss should appreciate your willingness to sacrifice a Sunday!

Contact Denise Martin 03 690 8000.


PART ONE

Client/Server Technologies
There is more to client/server technology than the anthropomorphic view common in most discussions of the topic.

by Doug Rickard, director and senior technology consultant with Software Technologies Pty Ltd, Brisbane

ALTHOUGH client/server technologies have become a common buzzword in the last couple of years, examples of client/server technology can be found in systems as early as the 1950s. It has been the advent of widespread networking of computers which has made the technology particularly relevant in the last few years.

Before entering into this discussion we need clear definitions. The words ‘client’ and ‘server’ describe the relationship which exists between two entities. They may be programs, they may even be specialised hardware systems; it does not matter.

A ‘server’ provides a service which a client or clients may access.

A ‘client’ is any entity which calls on a ‘server’ to perform a function on its behalf. Clients initiate all communication with a server.

The concept that the client is the originator of the interaction with the server is vitally important for a true understanding of client/server systems. The server only responds to requests from clients. In some cases servers may interact simultaneously with multiple clients; that is, a many-to-one relationship may exist. In other cases there will be a fixed one-to-one relationship only. In many cases one server may well be a client of yet another server.
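To make the relationship concrete, here is a minimal sketch in C of the server's half of the conversation, written against the BSD sockets interface. The port number and the echo-style service are invented for illustration, and error handling is abbreviated. Note that the server never initiates anything: it waits, and answers whichever client speaks.

    /* Minimal sketch of a server: it binds to a well-known port and
     * then only ever responds to client requests.  (Illustrative
     * only; error handling omitted for brevity.) */
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(5000);    /* illustrative port only */

        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, 5);

        for (;;) {                              /* serve clients forever */
            char    buf[512];
            int     client = accept(listener, NULL, NULL);
            ssize_t n      = read(client, buf, sizeof buf); /* wait for a request */

            if (n > 0)
                write(client, buf, n);          /* respond, never initiate */
            close(client);
        }
    }

A client of this service would, by contrast, connect() to the port and write its request before reading the reply; the direction of the first message is what makes it the client.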

It is unfortunate that the marketing messages promoted by many uninformed sales people have tended to confuse the true meaning of these terms. We often see labels such as ‘server’ referring only to specific hardware boxes. Unfortunately this fails to identify the actual function provided by the box. In some cases it is a file server, in others a communications server, in some cases an application server, or even one box performing several functions.

Another example of the misunderstanding that can occur is often associated with a premier example of client/server technologies, the Xwindows graphics display system developed by MIT as part of Project Athena. Even the important ‘DMR Report on Open Systems’ a few years ago chided MIT for ‘getting it wrong’ in regard to which was the client and which was the server. In fact MIT got it right; DMR got it wrong! It must be remembered that the screen device is in fact a ‘display server’: it provides a display service to many different client applications which need to produce user-visible output, all at the same time.

Current implementations of client/server computing tend to fall into two main categories: resource sharing, and distributed computing.

Resource sharing systems are typified by the simpler LAN systems which provide only file, disk, or print sharing. The application may not even know it is using a remote resource, and applications do not have to be network aware at all.

Resource sharing client/server systems obtain their advantages from the ability to share the resources of a single server amongst many clients simultaneously. With file servers, for example, a single copy of an application on the server disk may be loaded into the memory of many clients at once. This minimises the amount of disk storage required to store applications, and improves the ability to manage the licensing and updating of applications. Similarly, with common data such as telephone lists, company data, etc., only one copy needs to be kept at a single central location. If ever that data needs to be updated, all users have access to the new data simultaneously, thus overcoming the problem of stale or redundant data on systems. With print servers, one printer may suffice for a whole work group, instead of a printer for each user; in this case a more expensive but more capable printer can often be justified.

Distributed computing is typified by many of the SQL based remote data base access systems now available. The application itself is network aware, in that the application is constructed of a number of independent modules, each of which may be running on a different processor in the network. By distributing the intelligence of an application in this way, steps can be taken to reduce the actual amount of data that needs to traverse the network. This may not be of great importance in a small high speed LAN, but when expensive lower speed long distance communications lines are used in a WAN environment the benefits can be very important.

The benefits to be gained from client/server computing depend not only on whether the system is a resource sharing or a distributed application, but on a variety of other factors as well.

Client/server based technologies can minimise the amount of network traffic, thus improving network effectiveness. They can improve the effective utilisation of host systems by minimising the number of processes running. They can lead to designs which tend to maximise the re-usability of code through modularisation. They can reduce long term code maintenance costs because of the resulting simplification in design and the ‘separation of form from function’.

Client/server computing can improve the performance of computing systems by distributing an application in such a way that the different parts of the application can run on the most efficient platform for that part. Why waste a mainframe CPU's clock cycles doing dumb screen painting when it should be doing important data base operations instead? For example, client/server technologies allow an application to be broken up in such a way as to unload character intensive tasks such as screen and keyboard handling from the mainframe and put them on an alternate platform, such as a PC, where the high interrupt load of complex screen handling rightly belongs.

This division of an application into cooperating client and server tasks can take place at a variety of levels, and in a variety of manners. In fact, one server task is often a client of yet another server task. The tasks may be on one machine, or they may be spread over a number of networked machines. The communication between the tasks may utilise a variety of methods, such as Remote Procedure Calls (RPC) and message passing. A task on a mainframe could just as easily be a client of a server task running on a PC. There is no preordained relationship that has to exist between the client and the server platforms.
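As a sketch of the message-passing style, many such systems prepend a small fixed header to every request and reply, so that the same framing code can run on a mainframe, a PC, or anything in between. The layout below is invented for illustration and is not any particular product's wire format:

    /* Illustrative header carried on every request and reply.  The
     * fields are kept in network byte order so that a client on one
     * architecture can talk to a server on another. */
    #include <stdint.h>

    struct msg_header {
        uint16_t opcode;    /* which service function is requested */
        uint16_t flags;     /* e.g. request vs. reply              */
        uint32_t length;    /* number of payload bytes that follow */
    };

Because the header says nothing about where either party runs, the same framing serves equally well when the mainframe is the client and the PC is the server.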

Client/server benefits can be summarised as:
o improved maintainability through code modularisation
o improved utilisation of machine resources
o improved utilisation of communications bandwidth
o improved application performance.

Over the longer term, the cost of code maintenance can often greatly exceed the original application development cost. Increased modularity of the original code has proven to be one of the best ways of minimising these longer term maintenance costs. Client/server technologies provide an excellent methodology for the appropriate division of an application into its logical parts, and can reduce maintenance costs dramatically.

The division of an application into component parts that are optimised for the particular platform on which they will be executed provides a good example of symbiosis; the earlier separation of data base access and screen display onto separate platforms illustrates this. Mainframe systems that are designed to handle data efficiently in 32 bit or 64 bit ‘chunks’ are not very efficient at handling individual characters, whereas even very simple 8 or 16 bit architectures can provide very reasonable performance when painting a screen.

Client/server systems can minimise the amount of network traffic over a wide area network, where bandwidth is relatively expensive. This can be achieved by improving the effectiveness of each packet that crosses the network, or by the intelligent use of local information in addition to information that comes across the network, e.g. local caching of forms.
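As a small illustration of the local-information point, a client might satisfy a form request from a local disk cache and generate WAN traffic only on a miss. The file layout and the fetch_form_from_server() helper below are assumptions made for this sketch:

    /* Fetch a screen form, preferring a locally cached copy.  Only a
     * cache miss generates network traffic.  fetch_form_from_server()
     * is a hypothetical remote call. */
    #include <stdio.h>

    extern int fetch_form_from_server(const char *name, const char *dest);

    FILE *get_form(const char *name)
    {
        char  path[256];
        FILE *fp;

        sprintf(path, "cache/%s.frm", name);
        fp = fopen(path, "r");          /* local hit: no network traffic */
        if (fp != NULL)
            return fp;

        if (fetch_form_from_server(name, path) < 0)  /* miss: one round trip */
            return NULL;
        return fopen(path, "r");        /* reuse the newly cached copy */
    }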

The overall effect of the above three benefits is improved application performance, both perceived and real. Perceived performance improvements can be achieved by methods such as the local echoing of keystrokes by the client, instead of the echo occurring over the network. Real performance improvements can also be achieved by the above methods, and adopting several of them in conjunction can lead to even greater gains. For example, it is usually easy to make server tasks ‘multi-threaded’ and improve performance dramatically.

In current computing methodology, the concept of modular and/or reusable code holds sway. The idea of ‘separation of form from function’ so as to minimise code maintenance problems is also very prevalent. Client/server technologies tend to combine many of these concepts in one. When designing distributed applications, deciding what parts of the system should be performed in the server and what parts should be done in the client means that a very important part of the modularisation has already been achieved in a very natural way. With a well designed server, different types of clients on many different hardware platforms and operating systems can all access the one server simultaneously. This is really code re-use at its best.

[Figure: three models of PC to HOST connectivity: (A) terminal emulation, (B) file server, (C) distributed client/server application.]

PC to HOST connectivity typically falls into three different categories:
(A) shows the PC running a terminal emulation program connected to a standard terminal based application on the host. Every keyboard character and every screen display goes over the network, including keyboard echo.
(B) shows the PC running a standard PC application, but connected to a host providing a file server function. If record ‘n’ is required by the PC application, every record from 1 to ‘n’ may be passed across the network, with records 1 to ‘n-1’ being discarded.
(C) shows a distributed application, with part of the application running on the PC (client) and part running on the host (server). If the PC client requires record ‘n’ it may format an ISAM or SQL type request and send it to the server. The server will retrieve the specific data requested and return only this over the network to the client. The client handles all display formatting.





Modular code minimises maintenance costs generally by concentrating specific functions in one module. For example, by concentrating screen I/O in one module, the changes necessary to accommodate a new device can be made all in one place, instead of in many places throughout the application. Even with this approach, however, if a change is made, the whole application may often need to be recompiled and relinked. If instead that screen I/O module were made a separate client task of the main application, it would be necessary to recompile and relink only the client module. Even better, it becomes a simple matter to have a number of different client modules for different screen devices, e.g. character cell terminals and Xwindows devices, with the appropriate one being activated as a function of the display type.
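A sketch of that idea in C (the structure and function names here are invented for illustration): if all screen I/O goes through a small table of function pointers, a character cell client and an Xwindows client can share every other line of code, and only the table changes.

    /* Illustrative 'display operations' table.  Each client module
     * supplies its own implementations; the rest of the application
     * is unchanged whichever display is in use. */
    #include <stdio.h>

    struct display_ops {
        void (*clear_screen)(void);
        void (*put_text)(int row, int col, const char *text);
    };

    /* One concrete module: a character cell terminal driven by ANSI
     * escape sequences.  An Xwindows module would supply the same
     * table with different functions. */
    static void vt_clear(void) { printf("\033[2J"); }
    static void vt_put(int row, int col, const char *text)
    {
        printf("\033[%d;%dH%s", row, col, text);
    }
    static struct display_ops vt_display = { vt_clear, vt_put };

    /* Chosen once at start-up according to the display type. */
    static struct display_ops *scr = &vt_display;

    int main(void)
    {
        scr->clear_screen();
        scr->put_text(1, 1, "Employee Inquiry");
        return 0;
    }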

As a simple example of the type of general modularity that is possible, Software Technologies Pty Ltd uses a single standard client module skeleton written in ‘C’ which can be compiled to run on the MS-DOS, OS/2, Unix, and VMS operating systems, and which can utilise TCP/IP or DECnet networking protocols on any of those platforms. This allows client systems to be developed in a minimum of time, no matter what the platform is to be. It is also easy to move the actual client task to a different platform with minimal or no code changes.
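The conditional compilation involved might look something like the fragment below. This is a guess at the general shape of such a skeleton, not Software Technologies' actual code:

    /* Sketch of a portable client skeleton: one source file, with a
     * different macro defined on each platform.  net_connect() hides
     * the transport behind a common interface. */
    #if defined(USE_TCPIP)
    #  include <sys/socket.h>           /* BSD-style socket interface */
    #  include <netinet/in.h>
    #elif defined(USE_DECNET)
       /* DECnet headers differ by platform; omitted in this sketch. */
    #endif

    int net_connect(const char *host, const char *service)
    {
        (void)host; (void)service;      /* unused in this sketch      */
    #if defined(USE_TCPIP)
        /* resolve host/service, then connect() a stream socket ...  */
    #elif defined(USE_DECNET)
        /* build a DECnet object name and open a logical link ...    */
    #endif
        return -1;                      /* connection body elided     */
    }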

Dividing up the functionality and spreading it across different hardware platforms can be used to maximise the efficiency of each system. For example, a large mainframe with an extremely big data base is very inefficient when it comes to manipulating individual characters on a screen to produce a pleasing display. On the other hand, a PC will be able to provide exceptional, individually tailored, graphics based display screens, but will be a poor data base machine.

One of the reasons for this is that the machine architecture and instruction set of a computer are usually optimised for a specific area of operation. The ability to perform high speed pipelining is very important in a machine designed for high performance data base operations, but can become a disadvantage when performing simple character manipulations on a display screen, where the interrupt load is liable to be very high.

If however the two systems are combined in a client/server relationship, with the PC doing all keyboard input handling, input checking and verification, and final output display, and with the mainframe just receiving a data base request which it looks up and returns as a single packet to the PC, each machine is now doing just what it does best. This can give an organisation optimal efficiency and productivity from its computing resources.

To correctly understand the improvements that client/server computing can make to network performance in particular, a few different networking scenarios must be examined first.

One of the earliest methods used for giving users over a wide geographical area access to applications on a centralised host was the use of ‘terminal’ networks. There are many doubts today as to whether these should really be called ‘networks’, as they did not run any higher level protocols at all. In the simplest case, each terminal was connected via a modem and a telephone line to the host. Typical speeds were 1200/75 bps. This was very expensive when a number of terminals existed at any one point, which saw the introduction of complex multiplexing systems, such as ‘stat muxes’, to allow a number of terminal data streams to be combined over one modem link.

With this mode of operation, however, for each character typed at the keyboard or displayed on the screen, on average one character went in each direction over the network. There was little overhead added by the multiplexing system. Indeed, in many cases the overheads reduced as actual traffic increased.

Later, protocol based networks such as TCP/IP, DECnet, and SNA were introduced. For simple terminal connectivity over the wide area network, TCP/IP would use the ‘Telnet’ protocol, and DECnet would use the ‘CTERM’ protocol. These posed an entirely different problem, because there was a basic overhead for the protocol independent of the actual amount of traffic.

With a text editor as an example, each character typed at the keyboard must be immediately sent to the application. In turn, the application must analyse the character, and in the majority of cases echo that character back to the remote screen. Each character typed results in a protocol packet being sent in each direction. With most protocols, the minimum packet size is between 40 and 64 bytes. At one minimum-size packet in each direction, for each character typed at the keyboard up to 128 characters have to traverse the wide area network, where bandwidth is very expensive! This is an extreme case, but it is important to understand just how inefficient some of the systems in widespread use today are in terms of network performance.

In a LAN with character cell terminals connected via terminal servers the problems are normally not so severe, because network bandwidth is much cheaper. However, if the LAN is widely distributed using lower speed bridges, the same problem of the number of effective data characters per packet can arise. This is where it is interesting to compare two of the different terminal connectivity protocols in common use. TCP/IP Telnet still has the same problems, because the ratio of data to protocol overhead can be inefficient. However, the DEC LAT protocol does show some important benefits. LAT can multiplex multiple terminal sessions into one network logical link and so achieve an efficient ratio of data characters to protocol overhead. Telnet uses one logical link per terminal session and can be much less efficient. In one real example, Telnet was producing nearly 40 times the measured network load of LAT over the same network for the same application. What was disconcerting was that this exceeded even the calculated differences based on the protocols. This has not been satisfactorily explained yet.

It should be obvious that any method whereby we can reduce the network traffic of distributed interactive applications can provide considerable cost benefits.

If an application can be redesigned to use client/server technologies over the WAN, considerable communications economies can often be achieved. Consider the case of a simple inquiry system where a six digit employee number is entered in order to retrieve an employee record from the personnel files. There are really three different phases in an inquiry:
1. Display a form on the screen, get the lookup data from the user, and validate it.
2. Look up the information in the data base and retrieve the record.
3. Display the retrieved information together with appropriate field identification information.

If the inquiry form takes 100 characters, and the display form takes 500 characters, all just to display a 150 character record, it is easy to see where network bandwidth is used. If on the other hand a client/server approach is taken, a different picture emerges. A PC based client could display the inquiry form from the local disk. After the user has input the employee number the PC would validate it. Only then would the client put the validated employee number into a protocol packet and send it to the server part of the application. The server now has a very simple job: retrieving the matching employee record. The server does not need to know anything about how the record is to be displayed. It just puts the record into a packet and sends it back to the client. The client retrieves a display form from the local disk, displays it, and then fills in the fields with the information returned from the server.
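A sketch of that exchange in C follows. The field sizes come from the figures above; the packet layout and the send_packet()/recv_packet() framing helpers are invented for illustration. Roughly six data bytes travel out and roughly 150 come back, instead of the 600-odd characters of forms that would otherwise cross the WAN.

    #include <string.h>

    /* Hypothetical framing helpers: send or receive one whole packet. */
    extern int send_packet(int conn, const void *buf, unsigned len);
    extern int recv_packet(int conn, void *buf, unsigned len);

    /* The request carries only the validated six digit employee
     * number; the reply carries only the 150 character record.  Both
     * forms stay on the client's local disk and never cross the WAN. */
    struct inquiry_req { char employee_no[6]; };   /*   6 data bytes */
    struct inquiry_rep { char record[150];    };   /* 150 data bytes */

    int do_inquiry(int conn, const char *empno, struct inquiry_rep *rep)
    {
        struct inquiry_req req;

        memcpy(req.employee_no, empno, 6);           /* already validated */
        if (send_packet(conn, &req, sizeof req) < 0) /* one packet out    */
            return -1;
        return recv_packet(conn, rep, sizeof *rep);  /* one packet back   */
    }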

There are a number of issues here. Firstly, note how the amount of information that traverses the WAN has been dramatically reduced. Secondly, as all echoing of user input is done locally, and not across the network, the user does not see the network delay, and the user's perception is of a very high speed system. It is interesting to note that investigations of user perceptions of performance have shown that users are very critical of even the slightest delay in the echoing of individual characters, but are very accepting of delays after they have hit ‘enter’. Character echo delays of even 200 milliseconds were perceived by users to be worse than an inquiry response time of 5 seconds.

Client/server applications can bundle up a whole inquiry into one packet before sending it to the server, and the response is returned as one concise packet which is then expanded for display. In this manner, client/server applications can reduce wide area network loads dramatically, with fivefold reductions in network traffic not being uncommon.

In order to more fully appreciate some of the efficiencies that can be achieved in practice, two Australian examples will be discussed. It is important to note that in each case the advantages of moving to client/server architectures were realised in totally different areas. This underlines the need to study each application and the environment in which it will operate in order to benefit fully from the gains which are available.

Both solutions are examples of distributed applications. The first was on a local high speed LAN where network performance was not an issue, but host performance was. The second was WAN based, where network costs and network performance were very significant issues.

Case 1.
The customer was a major financial trading house. Money market figures were coming in constantly from all over the world. The traders had to refer frequently to figures varying on a minute-to-minute basis. The information was kept on a VAX cluster of two VAX-8650 machines, each with a 64-user licence. The users connected to the cluster using 700 PCs on an Ethernet, running a ‘SET HOST’ terminal emulation program. As each VAX was only licensed for 64 users, only 128 PCs could be logged in at once, so the standard method of operation was for the trader to log into the VAX, do an inquiry, then immediately log out so another user could get in. This became very frustrating during a busy day.

The main application was written in Cobol, but had been written very modularly. The existing terminal front end module was replaced with a standard server base module which provided all the network connectivity and multi-threading capability. A corresponding PC client was created, based on the existing terminal front end module, so that the user interface remained the same; a standard client base module provided all the network connectivity.

The original method of operation was based on time sharing, and an individual login and user process was required for each user. Not only was this financially expensive, because VMS licensing is on a ‘per user’ basis, but process context switching on a VAX/VMS system is also very expensive in terms of system resources, because VMS processes are ‘heavyweight’. The replacement server, however, was multi-threaded and ran within the context of a single process, with up to 1024 clients capable of concurrently accessing the server. Context switching within a single process is often just a case of changing a pointer, and so is very efficient. Thus system performance was improved dramatically. As well, only a single user slot was now being used on the VAX, so instead of a licence for 128 users, only a two-user licence was needed. In a test of 200 users making a simultaneous request, none reported even a one second delay, and most reported the response as instantaneous. The improved system performance allowed the two-node VAX-8650 cluster with 64-user licences to be replaced by a single VAX-3300 with a two-user licence! Actual development of both the server and the client took 20 minutes, based on existing code libraries!
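The article does not say which threading package the replacement server used, but the single-process, many-client shape it describes can be sketched with the standard select() call: one process watches many connections, and a 'context switch' is no more than moving on to the next ready descriptor. The handle_request() routine standing in for the Cobol application logic is hypothetical.

    /* Sketch of a single process server multiplexing many clients.
     * handle_request() reads one request and sends one reply; it is
     * a stand-in for the application logic.  Error handling is
     * abbreviated. */
    #include <unistd.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    extern int handle_request(int fd);

    void serve(int listener)
    {
        fd_set all, ready;
        int fd, maxfd = listener;

        FD_ZERO(&all);
        FD_SET(listener, &all);

        for (;;) {
            ready = all;
            select(maxfd + 1, &ready, NULL, NULL, NULL);

            for (fd = 0; fd <= maxfd; fd++) {
                if (!FD_ISSET(fd, &ready))
                    continue;
                if (fd == listener) {           /* new client arriving */
                    int c = accept(listener, NULL, NULL);
                    FD_SET(c, &all);
                    if (c > maxfd)
                        maxfd = c;
                } else if (handle_request(fd) <= 0) {
                    close(fd);                  /* client went away */
                    FD_CLR(fd, &all);
                }
            }
        }
    }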

Author: Doug Rickard MACS is a director of Software Technologies Pty Ltd, Coopers Plains, QLD 4108. Part 2 of this article will appear in the February issue of Professional Computing.

