
Open Source software in the enterprise

([email protected])

Motivation

This paper discusses the relative merits and demerits of Open Source software versus closed-source proprietary software for the enterprise, viewed from the perspective of a corporate user rather than of a company making revenue from selling or servicing computers or software. In order to present practical value rather than abstract principles, it focuses on Microsoft and Intel as champions of the proprietary world versus Linux and the GNU project as champions of open source, with assorted bits from the history of computing to illustrate some principles. At the conclusion, cost factors will be an important measure.

I may have some experience using Open Source software, but I am hardly a scholar on the subject like Eric S. Raymond. I do not claim this paper to be entirely objective and unbiased, but the same goes for most other publications on the subject. Before complaining about the lack of hard numbers, please read the classic book ‘How to Lie with Statistics’.

The author reserves copyright. Permission is granted to make and distribute verbatim copies of this document, to convert it into different data formats, and to print it without fee.

Compatibility

In the sixties IBM introduced its System/360, which for the first time was not a single model but an entire family of compatible machines, running OS/360. Its descendants are still in use today as System z hardware running z/OS.

Compatibility means, first, that all members of the family have the same Instruction Set Architecture, so programs in binary machine code can be run on all members running the same Operating System. Compatibility means, second, that all the hardware interfaces (connectors etc.) in the computer are standardised, so various components can be replaced by cheaper parts from different manufacturers.

Of course, in real life compatibility is limited. We often encounter backward compatibility, which means that a hardware or software product may replace (and presumably improve upon) an older version, but not the other way around. Compatibility helps to preserve a purchaser’s investment in software and, to a lesser extent, also in hardware.

When the hard- and software architecture of a computer family is turned into published standards, other vendors are likely to produce compatible hardware, driving the price down.

Monopolies

It used to be said that “nobody ever got fired for buying IBM”. You could buy all your hardware, software, and support from Big Blue. That saying is still heard today, with the name of Microsoft replacing IBM. The US government has at one time or another tried, in vain, to use anti-trust laws to break up both firms.

The effect of vendor lock-in may be illustrated by the following quote from the German c’t magazine (nr. 21/2003, p. 126, my translation).

“But that’s not enough: a large part of the new functions [of Office 2003] necessary for co-operative creation of documents require the new Windows Server 2003 and SharePoint Portal Server. A small example: a company with 25 Office workstations wishing to switch to the Professional Edition will pay 10,000 Euro for the update. Add to this the 2,400 Euro for the Windows 2003 Enterprise Server and 7,050 Euro for the SharePoint Server. If these workstations are not running Windows XP yet, this incurs another 3,250 Euro (25 × 130)—a tremendous deal for Microsoft.”
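
Adding up the quoted figures gives an idea of the total outlay for this small scenario (all numbers are taken directly from the quote above):

    10,000 + 2,400 + 7,050 + 3,250 = 22,700 Euro for 25 seats,
    i.e. roughly 22,700 / 25 ≈ 908 Euro per workstation.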

Proponents of the Windows platform argue that if everyone uses the same products, expenses due to incompatibilities will be minimised. As a general trend, market leaders often try to make it harder for their competitors to take away their market share with compatible products. It may be easier for small businesses without an IT staff of their own to use only Microsoft products, but enterprises need a heterogeneous infrastructure to meet their diverse needs. Microsoft may deliberately break backward compatibility to discourage their customers from sticking to their old products.

A related trend, called the network effect, says that the utility value of a product increases with the number of users. This is especially true for the computer industry, where the economies of scale tend to be huge. It has become quite hard for newcomers like BeOS to gain a foothold in the market.

Another argument in favour of accepting monopolies is based on a religious or political principle of equity: a company that seeks to monopolise its own market should not complain when its suppliers overcharge it, but simply pass the burden on to its customers.

However, the rule that monopolies lead to excessive prices and stifle innovation is vindicated in practice, as exemplified by the prices of Windows and Office. Microsoft has been known on several occasions to lower its bid where a large prospective customer was seriously contemplating a switch to Linux. Thus you could consider Linux simply in order to obtain better prices from Microsoft.

Microsoft and Intel

In 1971 Intel introduced the first commercial microprocessor, the 4-bit 4004 chip. It was intended for digital instruments, pocket calculators, wrist watches and the like, but not for building computers. A popular subculture grew up of electronics enthusiasts using microprocessors to build their own home computers, soon leading to companies like Heathkit selling easy-to-assemble microcomputer kits.

In a meeting of the Homebrew Computer Club, a young bespectacled man called Bill Gates delivered a speech urging his fellow hobbyists to sell their software rather than share it freely. With his companion Paul Allen (later joined by Steve Ballmer) he started Microsoft Corporation and proceeded to sell a Basic interpreter, which gave the Basic programming language new popularity, being well adapted to the limited capabilities of the micros of the time.

When Digital Research created the simple CP/M Operating System for the 8-bit Intel 8080 microprocessor, it was quickly adopted by a number of computer builders, including the Dutch Philips company. With programs like VisiCalc, dBASE II, and WordStar these machines slowly made their way into the office.

The gold rush days of the computer industry started when IBM hastily developed its PC, based on Intel’s 8088 microprocessor (a variant of the 8086, itself a 16-bit extension of the 8080). IBM approached Microsoft seeking to include their Basic interpreter, was then directed to Digital Research for their CP/M and, after failing to reach a deal, returned to Microsoft. The startup bought up a little-known OS compatible with CP/M and, after a little fixing, sold it in 1981 as PC-DOS 1.0.

The IBM PC was built from standard third-party components, and IBM standardised and published the hardware design. As a consequence, numerous vendors started selling add-on products. Compaq (eventually merged with Hewlett-Packard) became the leader of the PC-clone market.

The deal with IBM, which included royalties for every copy of PC-DOS (and Basic) sold, turned Bill Gates into a billionaire. Microsoft sold the same system to PC-compatible manufacturers under the MS-DOS brand name. While DOS remained small and limited, it imitated the most useful features of the Unix OS without its multi-tasking capabilities.

The PC turned into a mixed blessing for IBM, killing off its typewriter division and hurting its mainframe market. IBM and Microsoft split up over the development of OS/2, leaving IBM to finish the Operating System for its Personal System/2 family, meant to succeed the PC, PCjr, PC-XT, PC-AT and PC-RT. The new family was based on the MCA bus (and the Intel 80286 and 80386 microprocessors).

IBM was determined not to give away the fruits of its labours again, but by now the competition simply continued the old PC family, standardising EISA and the VESA local bus to supplant the ISA bus of the PC-AT, until an Intel-led consortium introduced the PCI bus. IBM eventually abandoned its PS/2 line to rejoin the fray. Version 3.0 of OS/2 was extended to 32 bits and included a Graphical User Interface; sold on the open market, it grew in popularity until Microsoft conquered it with the inferior Windows 95 product.

In 1984 Apple Computer launched the Macintosh as a cheaper, compact alternative to the failed Lisa and Apple /// models. The Mac introduced the concepts of the Graphical User Interface to the masses, but for home use the cheaper Atari ST and Commodore Amiga were more powerful. The home computers in time evolved into game consoles from Sega, Nintendo, Sony, and today the Microsoft X-Box.

At a time when the market for MS-DOS applications was dominated by products like the WordPerfect 5.1 word processor and the Lotus 1-2-3 spreadsheet, Microsoft gained significant market share with its Word and Excel on the Macintosh. After the divorce from IBM it developed its OS/2 code base into the Windows product and then ported Word and Excel to that, gradually turning the market from a jungle into a monoculture and slowing down innovation.

As PCs proliferated in the office, they started getting connected by Local Area Networks. Novell NetWare became the dominant network Operating System, offering shared file storage and printing to DOS PCs. Microsoft retaliated with LAN Manager and failed, did not have much success with Windows for Workgroups either, but in the long run conquered the market with Windows NT Server, which only reached maturity with Windows 2000.

The 80386 32-bit microprocessor from Intel contained all the hardware support needed for a multi-user Operating System, and SCO Unix, descended from Microsoft Xenix, ran on it. Microsoft could not stay behind and decided to write a real Operating System, for which it formed a team led by Dave Cutler, who had worked on Digital Equipment’s VMS. The resulting Windows NT incorporated a range of novel features like a microkernel, but the need for backward compatibility with Windows 3.1 delayed its completion.

When it was finally available, it achieved but a small following, being expensive and resource-hungry. Its stability, though, was a great improvement over the dreaded ‘blue screens’ associated with Windows. Microsoft produced a quick fix in the form of Windows 95 (a.k.a. Windows 4.0), consisting of a 32-bit API on top of a 16-bit Windows layer on top of MS-DOS 7.0. Only with the introduction of Windows XP (a.k.a. Windows NT 5.1) in 2001 was Microsoft finally ready for the desktop.

The PC established a huge market, due in part to the IBM brand name, which Intel and Microsoft gradually came to dominate. Microsoft has ever since used its dominance of the desktop OS to push its conquest of new markets like server operating systems. As Intel-based hardware grew more powerful, Windows moved from small workgroup servers into the corporate world, but at no time did it come close to the kind of monopoly seen on the desktop. Only at the enterprise level does the reign of the mainframe remain unchallenged.

Intel complemented Microsoft with its dynasty of the 4-bit 4004, 8-bit 8008 and 8080, 8/16-bit 8086 and 80186, 16/32-bit 80286, 32-bit (80)386, (80)486, Pentium, Pentium Pro, Pentium II, Pentium III and Pentium 4, and 64-bit Itanium and Itanium 2 microprocessors. Apart from a few slip-ups which allowed AMD to temporarily snatch away some market share, it has been leading the chip industry (in dollars, not in number of chips sold).

Intel enjoys the economies of scale, needing to invest billions into R & D to design the next chip generations and build the fabs it needs to stay ahead of the game. This makes its competitors’ products either slower or more expensive. While Intel has maintained backward compatibility with its primitive first microprocessors, the others have used more efficient CISC and later RISC designs. The segmented addressing that Intel introduced in the i8086 to create a 20-bit address space with 16-bit address registers represents an unfortunate design decision. Maintaining backwards compatibility drives up costs and slows down innovation.
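
As a small illustration of that segmented addressing scheme, the following C sketch computes a physical address the way the 8086 does: the 16-bit segment value is shifted left by four bits and added to the 16-bit offset, yielding an address within a 2^20-byte (1 MB) space. The segment and offset values are arbitrary examples.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t segment = 0xB800;   /* example segment value */
        uint16_t offset  = 0x0010;   /* example offset within the segment */

        /* 8086 physical address: (segment << 4) + offset, truncated to 20 bits */
        uint32_t physical = (((uint32_t)segment << 4) + offset) & 0xFFFFF;

        printf("%04X:%04X -> physical address %05X\n",
               (unsigned)segment, (unsigned)offset, (unsigned)physical);
        printf("address space: 2^20 = %u bytes (1 MB)\n", 1u << 20);
        return 0;
    }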

The Itanium processor, which was scheduled for 1997 and delivered in 2001, is probably the biggest step away from this legacy. It uses an unproven VLIW design, for which special compilers are needed. It maintains compatibility with its ancestors by emulating the x86 and PA-RISC instruction sets, but this does not yield top performance. Intel has also produced the innovative but failed i432 chip set and the i860 and i960 RISC processors, and is successful with the energy-efficient XScale family of processors based on the architecture of the English ARM company, none of which are compatible with the x86.

Some observers have placed their bets on AMD’s x86-64 (AMD64) range, which is a more straightforward extension of Intel’s 32-bit architecture. Another competitor is Transmeta, whose Crusoe chip uses just-in-time compilation to emulate the x86 instruction set. This costs less performance than you would expect, and the part consumes very little power.

Moore’s law, a rule of thumb coined by Intel founding father Gordon Moore, states that the number of transistors on an (Intel processor) chip doubles every 18 months. While the billion-transistor chip is not far off, the laws of physics will eventually halt the progress. The computers of the future will probably be built from optical rather than electronic switches.

Another source quotes a similar tenfold increase in arithmetic speed every five years for the fastest computer of its time; this progress has only been maintained by moving from 1 CPU in 1950 to around 1,000 CPUs in 2000. The clock frequency increased about 1,000 times in 25 years. Along with the number of transistors, the power consumption has also increased, to 130 W on the Itanium 2, making cooling requirements a major obstacle to further speed increases.
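
If one assumes that speed grows roughly in step with transistor count, the two rules of thumb are consistent with each other; compounding a doubling every 18 months over five years (60 months) gives

    2^(60 / 18) = 2^3.33 ≈ 10,

which is about the tenfold increase per five years quoted above.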

Computer generations

We have mentioned a few families of compatible computers: IBM mainframes running MVS, Intel PCs running DOS and Windows, Apple Macintoshes running Mac OS (note that the Macintosh brand includes a few incompatible families), and Digital VAXen running VMS. The number of families is now much smaller than thirty years ago, with e.g. Philips computers relegated to the history books. This is due to the increasing cost of producing software programs and hardware components for many architectures.

In the early nineteenth century Charles Babbage designed the Analytical Engine, the first digital computer. Never completed, it vanished into obscurity until Konrad Zuse built the first working computer from electro-mechanical relays in the 1930s; since the Hitler government did not advertise it much, some Americans maintain that the honour belongs to Howard H. Aiken’s Mark I.

Computers of the first generation (ca. 1945 – 1955) were as large as a room, very expensive, slow, and had very little main memory. They used vacuum tubes to perform arithmetic, paper tape for input and teletypes for output. Programs were hard-wired.

Computers of the second generation (ca. 1955 – 1965) were built from transistors, which were cheap, small, reliable and consumed less energy, and used ferrite rings (core) for main memory. They were mostly programmed in assembly language, but there were simple high-level programming languages like Fortran, Basic, Lisp, and Algol 60. With magnetic tapes and later disks the first Operating Systems appeared, which accepted jobs in the form of batches of punched cards. No longer did the programmer have the computer at his own disposal for a few hours; instead the mainframe held jobs in a queue and executed tasks for which hardware resources were available.

Computers of the third generation (ca. 1965 – 1980) were built from Integrated Circuits, containing many transistors on a silicon chip. Main memories grew to a megabyte as RAM chips began to replace core memory. Multi-tasking Operating Systems like OS/360 appeared, where many users could run programs virtually simultaneously from video terminals. High-level programming languages like Cobol were in widespread use.

Computers of the fourth generation (ca. 1980 – 1990) used ICs with thousands of transistors, and larger machines had multiple CPUs. Mainframes with high I/O bandwidth were used for commercial applications, and supercomputers with high floating-point arithmetic speed were used for technical and scientific applications. A CPU on a single chip is called a microprocessor, used in lower-end machines termed microcomputers, which could be used for light administrative applications or as terminals, often with Graphical User Interfaces. Computer networks became commonplace, but Distributed Operating Systems remained a research subject. Relational databases were common in the commercial world. Optical disks could store about a gigabyte of data, while magnetic disk capacities were measured in megabytes.

The fifth computer generation (ca. 1985) was to be characterised by ICs with millions of transistors built from superconducting devices, massively parallel computers (thousands of processors), very high-level programming languages based on Prolog, and Artificial Intelligence applications in the form of Expert Systems. The development of fifth-generation computers was spearheaded by Japan and the European Community to challenge the leadership of the United States. The start of the project was highly publicised; its end went by unnoticed. The amount of time and money spent may have been less than on Windows 2000 and the Itanium.

At the outset of the third millennium ICs have hundreds of millions of transistors, and supercomputers with a thousand CPUs have replaced vector processors with few CPUs like the Cray-1. The NEC Earth Simulator is an exception with its 5,120 vector processors on a 500 MHz clock, delivering a total of 41 TFLOPS. Its main memory of 10 TB is distributed across 640 nodes of 8 CPUs each, with a 640 × 640 crossbar switch delivering 16 GB/s of bandwidth between nodes. With each node consuming 20 kVA, it contributes to the global warming it was designed to model. The machine fills a four-storey building of 65 × 50 m.
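
These figures are mutually consistent; the per-processor performance follows directly from the totals quoted above:

    640 nodes × 8 vector CPUs per node = 5,120 CPUs,
    41 TFLOPS ÷ 5,120 CPUs ≈ 8 GFLOPS per vector CPU.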

A desktop computer of that time might have a single CPU delivering 1 GFLOPS and on the order of 1 GB of RAM. Magnetic disk drives had grown to 100 GB, whereas optical disks reached 10 GB. Its video card had a 3-D rendering engine faster than the CPU and 64 MB of RAM. With a 133 MB/s PCI bus, its I/O bandwidth lagged well behind the mainframe world. It sported a 1 Gb/s (ca. 100 MB/s) Ethernet connection. Access to the Internet (formerly the Arpanet) would be a 1 Mb/s connection, offering primitive smtp, ftp, http, and ssh (encrypted telnet) protocols. With limited vector processing appearing on the desktop, it was basically a faster fourth-generation system. Desktop OSes had become similar to their mainframe counterparts.
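
The bandwidth figures quoted here follow from bus width and clock rate (the Ethernet figure assumes the usual 8 bits per byte plus some protocol overhead):

    PCI: 32 bits × 33 MHz = 4 bytes × 33 MHz ≈ 133 MB/s,
    Gigabit Ethernet: 1 Gb/s ÷ 8 = 125 MB/s raw, ca. 100 MB/s in practice.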

Luxury automobiles, jam-packed with sensors, actuators, cabling and processors, totalled more computing power than desktops. Similar in form and function, pocket computers and cellular telephones had become the battlefield between the computer and telephone industries. Home computers turned into digital television sets.

Artificial Intelligence had produced computers beating chess grandmasters, but failed at practical tasks. Speech and handwriting recognition were being deployed only in very limited application areas. The number of programming languages in actual use was declining, with Sun’s Java and Microsoft’s C# vying for first place. Programs for the Earth Simulator and its ilk were written in carefully hand-parallelised Fortran and C dialects.

A related taxonomy for IC technology distinguishes among:
• Small Scale Integration (1 – 10 logical gates, where a gate may consist of two transistors)
• Medium Scale Integration (10 – 100 gates)
• Large Scale Integration (100 – 100,000 gates)
• Very Large Scale Integration (more than 100,000 gates)

Like the subdivision into generations, this taxonomy is falling out of use.

As mainframes grew more powerful but remained big, expensive and shared, engineers started looking for a computer cheap enough to have at their own disposal. In 1961 Digital Equipment Corporation produced the first minicomputer, the 18-bit PDP-1 (a typical mini would be the size of a refrigerator). Its $ 120,000 price tag made it affordable for medium-sized companies, and minis sold like hotcakes. After a number of incompatible successors, the 16-bit PDP-11 became a very successful family of compatible machines, some still in use today.

At the end of the seventies DEC introduced the 32-bit VAX-11 family, a CISC design like the PDPs, with VMS as a mature multi-user Operating System. A distinguishing feature was the ability to join multiple VAXen into a cluster. With shared disks and special interconnects, the cluster presented to the user the image of a single system, so that she would not need to know which device stored her files or which computer ran her programs: the goal that Distributed Operating Systems are still striving for.

In 1992 Digital introduced the Alpha AXP, a family of 64-bit RISC microprocessors, which were used in desktops, servers and minisupers, running OpenVMS, Digital Unix, or Windows NT. This indicates that the micro-mini-mainframe subdivision is no longer very useful.

The Dutch computer industry was represented by Electrologica N.V., which was acquired around 1970 to become the Philips minicomputer division, before being taken over by Digital in the eighties and being terminated. Digital itself was bought by Compaq in the nineties, which recently merged with Hewlett-Packard, leading to a premature end of the Alpha AXP family.

As minicomputers became more powerful multi-user systems, the microprocessor enabled the microcomputer, which had the size of a typewriter and a price tag that put it within reach of the individual. Once IBM had made the microcomputer respectable, companies started selling millions of machines instead of thousands.

Apart from their size and price, these micros resembled early mainframes (and early minis) a lot. It is said that history repeats itself. While high-end minis like the Sun Fire 15K may be considered mainframes, today’s micros take the roles of workstation and server that were the province of minis around 1990. Today, cell phones and PDAs have the capabilities found in micros around 1980 at a lower price tag, and conveniently fit into a pocket. The bottom of the hierarchy is formed by the chip card, with trivial price and power consumption, supplanting passive cards with only a strip of magnetic material to store a few bits of information.

Unix and standards

At a time when IBM barely managed to stabilise the software for its System/360 line, the Massachusetts Institute of Technology, Bell Labs (AT&T’s research division), and General Electric collaborated on the ambitious MULTICS Operating System. While not the first multi-user multi-tasking OS, one of its novel features was that it was written in a high-level programming language called PL/I. MULTICS development took many years before it was usable for production work. Only a small number of systems were sold, and GE quit the computer business.

After Bell Labs withdrew from the project, one of its researchers, Ken Thompson, started writing a simple Operating System on the small 18-bit PDP-7 mini, incorporating features from MULTICS.

He was joined by Brian Kernighan and Dennis Ritchie, who created the C programming language and rewrote most of the kernel of Unics (renamed to Unix) in it. C borrowed some ideas from Algol 68 and Pascal, but was simpler, and the decision to move all I/O from the compiler into the standard library made it suitable for OS development; Niklaus Wirth took the same direction with his Oberon language.

Unix was successful because it was organised around a small number of simple concepts. The kernel contains the device drivers, interrupt handlers, memory manager and task scheduler. The rest of the system consists of programs running in user space that communicate with the kernel through a few system calls. Most programs use the richer Application Programming Interfaces exported by the standard libraries.

The best known Unix program is the shell, which interprets the user’s commands. Having little built-in functionality, the shell executes many small utility programs like ls, passwd, cp, mv, etc.

Unix became famous for its ‘everything is a file’ paradigm: disks and tapes are represented by special file types, just like terminals, printers, network sockets, main memory, pipes and directories. This paradigm is adhered to more strictly in successors like Plan 9. The file is a simple concept: an ordered stream of bytes without structure, supporting open, close, seek, read and write functions.
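
A minimal C sketch makes the paradigm concrete: the same open, read and close calls work on an ordinary file and on a device node (the default path /dev/zero below is just an illustrative choice, and error handling is kept to a bare minimum).

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        /* A regular file, a device, a named pipe: the interface is identical. */
        const char *path = (argc > 1) ? argv[1] : "/dev/zero";
        char buf[16];

        int fd = open(path, O_RDONLY);      /* one call for every file type */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        long n = (long) read(fd, buf, sizeof buf);  /* an unstructured stream of bytes */
        printf("read %ld bytes from %s\n", n, path);
        close(fd);
        return 0;
    }

Whether the path names a disk file, a terminal or a pipe, the kernel hides the difference behind the same file descriptor.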

Unix started out simple. A trio quickly created a working version in the small memory of a PDP-7. It was quickly ported to the PDP-11 and, with more effort, to the Interdata 8/32, the VAX and many more.

AT&T held a regulated monopoly over the U.S. telephone market and was not permitted to sell computers, so Bell Labs licensed Unix as source code for a modest fee to universities around the globe, where it quickly became popular. The University of California at Berkeley created the BSD Unix distribution, from which many improvements merged back into AT&T’s System V.

After the U.S. government broke up AT&T (which it failed to do with IBM and Microsoft), AT&T closed the System V source and raised license fees. Computer manufacturers sold enhanced versions in binary form only, but the BSD source code remained open.

The advantage of using Unix is that programs are portable across different computer families in source code form. Compiling the source produces executable programs, which are only compatible across computers with the same processor family and Operating System. Customers can thus avoid vendor lock-in; Unix computers often had a better price-performance ratio than proprietary systems (before PC hardware became powerful enough to compete).

The reality is not quite so rosy: the various Unix versions have continued to drift apart, to the point where you may ask whether IBM AIX, SCO UnixWare, Sun Solaris, HP-UX and SGI Irix are versions of Unix or merely children of Unix. Linux and FreeBSD are about as compatible with the true-blue Unices as those are with one another. The fragmentation of the Unix market became its biggest drawback, mitigated in time as few Unix vendors survived.

To improve the situation, representatives from the various parties convened under the auspices of the IEEE Standards Board and drafted the POSIX 1003 standard, which was widely adopted. This standard represents the least common denominator rather than an excessive list of features, and conforming implementations are called Open Systems. The list includes OpenVMS and even Windows NT, neither of which is particularly Unix-like.

Another important set of standards is the TCP/IP networking protocol suite. Outlined in the ever-growing list of Internet Requests for Comments published by the Internet Engineering Task Force, it forms the basis of most of today’s local area networks and internetworks. Not all RFCs have the status of a standard. The informal TCP/IP protocols have prevailed over the complex OSI network protocols defined by bureaucratic governmental standards bodies.

The de facto standard for TCP/IP implementations is the one from the Computer Systems Research Group at the University of California at Berkeley, which is distributed as Open Source with BSD Unix. The Microsoft TCP/IP stack is also derived from this code.

A history of Open Source software

(This section is borrowed from David A. Wheeler)

In the early days of computing (approximately 1945 to 1975), computer programs were often shared among developers, just as OSS/FS practitioners do now. An important development in this period was the ARPAnet (which later evolved into the Internet). Another critical development was the operating system Unix, developed by AT&T researchers and distributed as source code (with modification rights) for a nominal fee. Indeed, the interfaces of Unix eventually became the basis of the POSIX suite of standards. However, as the years progressed, and especially in the 1970s and 1980s, software developers increasingly closed off their software source from users. This included the Unix system itself; many had grown accustomed to the freedom of having the Unix source code, but AT&T suddenly increased fees and limited distribution, making it impossible for many users to change the software they used and share those modifications with others.

Richard Stallman, a researcher at the MIT Artificial Intelligence Lab, found this closing of software source code intolerable. In 1984 he started the GNU project to develop a complete Unix-like operating system that would be free software (free as in free speech, not free beer). In 1985 Stallman established the Free Software Foundation to work to preserve, protect, and promote Free Software; the FSF then became the primary organisational sponsor of the GNU project. The GNU project developed many important software programs, including the GNU C compiler and the Emacs text editor. A major legal innovation by Stallman was the GNU General Public License, a widely popular free software license.

However, the GNU project was stymied in its efforts to develop the kernel of the operating system. Following the advice of academics, it had chosen a microkernel architecture, and it was finding it difficult to develop a strong kernel, the GNU Hurd, using such an architecture; without it the GNU Operating System remained incomplete.

Meanwhile, the University of California at Berkeley had had a long relationship with AT&T’s Unix operating system, and Berkeley had ended up rewriting many Unix components. Keith Bostic solicited many people to rewrite the remaining key utilities from scratch, and eventually managed to create a nearly complete system whose source code could be freely released to the public without restriction. The omissions were quickly filled, and soon a number of operating systems were developed based on this effort. Unfortunately, these operating systems were held under a cloud of concern from lawsuits and counter-lawsuits for a number of years.

Another issue was that since the BSD licenses permitted companies to take the code and make it proprietary, companies such as Sun and BSDI did so, continuously siphoning developers from the openly sharable code and often not contributing back to the publicly available code. Finally, the projects that developed these operating systems tended to be small groups of people who gained a reputation of rarely accepting contributions from others (the reputation is unfair, but the perception nevertheless became widespread). The descendants of this effort include the capable operating systems NetBSD, OpenBSD, and FreeBSD. However, while these are used and respected, and proprietary variants of them (such as Apple OS X) are thriving, another OSS/FS effort quickly gained the limelight and much more market share.

In 1991, Linus Torvalds began developing a small operating system called Linux, at first primarily for learning about the Intel 80386 chip. Unlike the BSD efforts, Torvalds eventually settled on the GPL license, which forced competing parties working on the kernel code to work together. Advocates of the *BSDs dispute that this is an advantage, but even today major Linux distributions hire key kernel developers to work together on common code, in contrast to their BSD counterparts, which often do not share their improvements. Torvalds made a number of design decisions that proved to be remarkably wise: using a traditional monolithic kernel design, initially focusing on the Intel x86 architecture, working to implement features like dual-booting requested by users, and supporting hardware that was technically poor but widely used.

And finally, Torvalds stumbled upon a development process rather different from traditional approaches by exploiting the Internet. He publicly released new versions extremely often (sometimes more than once a day, allowing quick identification when regressions occurred), and he quickly delegated areas of responsibility to a large group of developers (instead of sticking to a very small number of developers). Instead of depending on rigid standards, rapid feedback in small increments and Darwinian competition were used to increase quality.

When the Linux kernel was combined with the already developed GNU toolchain and components from other places (like the BSD code base), the resulting operating system was surprisingly stable and capable. The FSF insists on the term ‘GNU/Linux’ for this system, because Linux proper is merely a kernel.

In 1996, Eric Raymond realised that Torvalds had created a whole new method of development that combines the sharing inherent in free/open source software with the speed of the Internet. His essay ‘The Cathedral and the Bazaar’ describes the method for other groups to emulate. The essay was highly influential and in particular convinced Netscape to switch to a free/open source approach for its next-generation web browser (which eventually resulted in Mozilla/Netscape 6).

In the spring of 1997, a group of leaders in the free software community gathered, including Eric S. Raymond, Tim O’Reilly, and Larry Wall. They were concerned that the term ‘free software’ was too confusing and unhelpful; the group coined the term ‘open source’ as an alternative, and Bruce Perens developed the initial version of ‘the Open Source Definition’ (see below), containing criteria for which license models should qualify as open source. The term Open Source became widely used, but the head of the FSF, Richard Stallman, kept resisting it, and Bruce Perens later reverted to the term free software, feeling the emphasis should be on user freedom.

Major Unix server applications like the Apache web server were quickly ported to GNU/Linux and the *BSD systems, which are based on open standards like POSIX and TCP/IP; this helped to establish their reputation as cheap servers.

After 2000, the use of Linux on the desktop slowly grew, hampered by the relative dearth of applications. The growing Open Source lobby targeted government bodies and educational institutions on the grounds that their public function should not be tied to a single software platform, but based on open standards. They met with more success in the Third World and China than in Europe and the United States.

Status quo

The historical effect has been that Open Systems have driven most proprietary systems from the minicomputer market, while Microsoft established a monoculture on the desktop.

By the end of the nineties, Windows NT had become powerful enough to compete against graphical Unix workstations and small Unix file servers. The old minicomputer vendors retreated to large servers. When Intel announced a 64-bit processor line, and Microsoft promised it would run Windows, the feeling was that Microsoft would establish its hegemony across the enterprise.

When the much-delayed Itanium finally arrived in 2001, its performance was below that of contemporary Pentium 4 processors, and barely a chip was sold. With 64-bit Windows XP delayed as well, the part was nicknamed Itanic in the press. Worse, AMD’s foray into 64-bithood received favourable reviews. The economic downturn certainly did not help. In the political arena, the new Bush government saved Microsoft from break-up.

The Unix family had relegated many proprietary operating systems like AOS, BOS, COS, DOS, TOS, VOS, OS/2, OS/9, OS/400, MPE, and VMS to the dustbin of history before Microsoft NT Server, *BSD, and Linux started to take away market share, especially hurting SCO, which used to lead the Unix-on-Intel market. Some observers hold that Linux actually benefits vendors like Sun. Since the SCO Group, which currently ‘owns’ the AT&T Unix source code base, no longer seems to be pushing development, there is no longer such a thing as the Unix™ Operating System, but the future of its Open Source relatives appears very bright.

Quality and cost

Microsoft’s virtual monopoly is not due to the quality of its software. It may have won some battles by undercutting competitors’ prices, but its flagship products Windows and Office are greatly overpriced. They excel in their number of handy features, few of which are unique. A flashy look does not make a program user-friendly; Apple has paid more consistent attention to usability and simplicity. Open Source software is not necessarily any better in this regard.

While Microsoft boasts about its ‘freedom to innovate’, practice tells a different story. It mostly adds new products to its portfolio by acquisition. Take, for example, the way the development of Internet Explorer stagnated after Netscape was safely defeated.

Windows’s stability problems are slowly improving. Microsoft claims many crashes are due to bugs in device drivers, which are often written by hardware manufacturers. Cheap, low-quality hardware and power fluctuations also contribute to the problem. Part of the problem is the inability to have multiple versions of a Dynamic Link Library installed, coupled with the ability of applications to replace system DLLs (popularly referred to as ‘dll hell’).

MS-DOS was a successful Operating System and a good choice for a PC in 1981, but a conservative design even at the time. Windows started out as a graphics library for running Word and Excel on DOS, and turned into a poor man’s Macintosh, or a stripped-down version of OS/2. LAN Manager was a poor challenge to Novell. Windows NT is in many respects a modern multi-tasking (since Win2K also multi-user) micro-kernel Operating System, but it had to be compatible with Windows 3.1, thus inheriting features that were not appropriate for it.

By comparison, the IBM mainframes were far more mature to start with, so there was less of a need for change. The legacy of twenty-five-year-old software is becoming a serious problem: very costly to maintain and nearly impossible to port to modern platforms. The creators of Unix were either very wise or very lucky, as their design gradually evolved into the modern operating systems of thirty years later.

Windows has far more security problems than other Operating Systems; it is the only remaining OS susceptible to infections by worms, trojan horses and viruses. Programs like Internet Explorer, Office, and Outlook have easy-to-use but dangerous features like automatically executed scripts, macros, and ActiveX objects embedded in documents. Security is not a feature that can be added on without changing important parts of the Windows design.

Microsoft’s Next Generation Secure Computing Base initiative addresses these issues by introducing a secure nexus into the PC hardware (in cooperation with Intel) and the kernel. It may be overkill, it may not achieve the desired effect, and it may help Hollywood to protect its Intellectual Property from illegal copying (Digital Rights Management), to the dismay of the user.

There have been several studies comparing the Total Cost of Ownership of Windows versus Linux, which turned up quite divergent results. In some cases it turned out that studies favouring Windows had been funded by Microsoft. In reality it is hard to determine the total cost of a company’s computing facilities, and the benefits are even more nebulous; still, Open Source almost always turns out cheaper. When Microsoft introduced a more expensive licensing scheme for companies in September 2002, customers began switching to Open Source.

According to most studies the TCO is dominated by system administration and support. For very large organisations and in developing countries the cost of software licenses can be decisive, so much so that the Chinese government can save money by developing its own version of Windows (and its own version of Linux as well). Additionally, there is the time-consuming administration of all those software licenses, needed for when the BSA comes raiding your premises at gunpoint. With Linux, you tend to need fewer computers with less expensive hardware.

We may assume end-user training costs and helpdesk needs to be similar (not counting the rather high transition costs), leaving system administration as the most disputed area (for developing countries personnel costs will be far lower).

It is said that Linux (and Unix in general) is more difficult and requires highly skilled, and therefore more expensive, administrators. Installation and configuration of Unix is seen as more difficult and time-consuming than installing a Windows PC; in part this is because a server is inherently more complex than a desktop. For Windows you get shrink-wrapped installation programs, compared to hand-edited configuration files for Unix.

While Unix systems take time to install and configure properly, my experience is that running systems require less attention than Windows boxes, as strikingly confirmed by a visit to Sun Microsystems, where a small team in Amersfoort, the Netherlands handles system administration for 10,000 Solaris desktops all over Europe. System administration remains a bigger challenge for the near future than hardware or application software; Sun, CA, HP, IBM and Novell seem to lead this area, with Apple, Microsoft, and the Open Source world lagging well behind.

To be effective, Unix should be used in a different way than Windows: a Windows network typically consists of a large number of specialised servers (file servers, print servers, domain controllers or logon servers, e-mail servers, web servers, Internet proxy servers, database servers, etc.), whereas Unix networks consist of a few big servers running all these services. Since IBM has been offering Linux on its mainframe range, their market share has been growing. The ability to allocate hardware resources to shifting workloads is a good selling point. For an enterprise, a million-dollar mainframe can have a lower TCO than a farm of PCs. Open Source means you have to buy your support separately, but since you have several parties to choose from, costs tend to be lower.

The prime advantage of the PC on every desktop is the availability of more computing power at the user’s fingertips. With the proliferation of Graphical User Interfaces, it has become very hard for a single machine to fluidly serve a hundred X terminals. The downside is that you now need the power of a mainframe on every desk. When computers were expensive, operators made efforts to keep them busy; the CPU in a desktop PC may be idle for up to 99.99 % of the time, as subsecond response times are the performance goal.

With 100 Watt desktop processors due to arrive soon, the electricity consumed by computers is a growing global problem. Manufacturers are just concerned about keeping all that energy from melting the chip.

In the course of more than twenty years we have been witnessing innovation at an astonishing rate: from PCs capable of handling the basic office tasks of their day to PCs capable of handling today’s basic office tasks, without much change in price. In order to keep up with this innovation, customers had to replace their PC park every three years, compared to ten years for bigger iron. The raw materials and energy needed to produce all these PCs are another global problem, to which all the game consoles, mobile phones and various smart appliances merrily contribute.

The progress of hardware has been driven both by the exponential growth in the amount of data stored and processed and by the growing resource consumption of software. Our productivity must increase steadily in order to maintain our current level of affluence, even without any population growth. Although the late Professor Edsger W. Dijkstra observed that software neither rusts nor wears out, PC hardware could last five or more years while the software on it is forever being replaced, in contrast to mainframe software of twenty years or older that merely has to be adapted now and then to reflect changing requirements. Linux is good for running old Unix applications on.

The second biggest advantage of the personal computer was that it wrested control of the computer from the IT department into the user’s own hands. The users had to wrestle with the DOS command prompt until they were given the illusion that Windows would be more user-friendly. After corporate networks made sharing information easier, the system administrators conquered the file servers, and eventually regained complete control over the desktops. The downside is that administration of hundreds of systems, each a little different, is far more labour-intensive than running a mammoth-sized mainframe.

Thus, the best way to save money is not just to replace closed-source software with open-source analogues, but to replace the paradigm of a computer on every desktop by server-based computing, making thin clients on the desktop as cheap as possible and concentrating resources on the server, in striking similarity to the old mainframe-terminal model. Microsoft, Sun, and the Open Source community offer different products for similar goals.

Source code

One of the benefits of the rising Open Source movement is that Microsoft has responded in several ways, one of them being its ‘shared source’ program, which, while not the same thing, gives customers access to some source code. The Chinese government has especially pushed its Red Flag Linux, fearing that the CIA and NSA may have planted backdoors to leak national secrets to the Pentagon. Microsoft will allow governments to examine the Windows source code.

The source code is the form in which computer programs are written and maintained. The source is usually compiled into executable programs, which are hard to read. Whether Open or Closed Source, customers should ask for the source code of their software, although this is less important for commodity software like Office suites. For custom-made software, this is not unusual. Alternatively, companies use software escrow, depositing source code with a trusted party who will release it if the supplier should go bankrupt.

Because executables are tied to a specific processor and Operating System, having source code provides the opportunity to port a program to a different computer. Even when troublesome, porting is easier than rewriting.

Possession of source code gives the customer the chance to fix bugs in their software rather than having to wait for a solution from the vendor, or to outsource maintenance to a third party. The same reasoning applies to extending software with new functionality. In my experience, companies that depend on their vendor to fix important bugs in their software rarely get the service they need, even after waiting for several months (all a consumer ever gets is a web site to download a patch from). In most cases the customer searches the Internet for others with the same problem who might have found a workaround.

Even without changing it, the ability to study the source code of your software is very important for e.g. the airline industry, where a bug in aircraft software can kill hundreds of people or an exploit may harm national security. Given that documentation is generally imprecise and incomplete, programmers like to inspect source code to understand the libraries and other software they program against.

The Open Source Definition

"!$#&%('*),+.-/#102'3!

Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria:

1. Free Redistribution

The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.

2. Source Code

The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicised means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.

3. Derived Works

The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software.

4. Integrity of The Author's Source Code

The license may restrict source code from being distributed in modified form only if the license allows the distribution of "patch files" with the source code for the purpose of modifying the program at build time. The license must explicitly permit distribution of software built from modified source code. The license may require derived works to carry a different name or version number from the original software.

5. No Discrimination Against Persons or Groups

The license must not discriminate against any person or group of persons.

6. No Discrimination Against Fields of Endeavour

The license must not restrict anyone from making use of the program in a specific field of endeavour. For example, it may not restrict the program from being used in a business, or from being used for genetic research.

7. Distribution of License

The rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties.

8. License Must Not Be Specific to a Product

The rights attached to the program must not depend on the program's being part of a particular software distribution. If the program is extracted from that distribution and used or distributed within the terms of the program's license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution.

9. License Must Not Restrict Other Software

The license must not place restrictions on other software that is distributed along with the licensed software. For example, the license must not insist that all other programs distributed on the same medium must be open-source software.

10. License Must Be Technology-Neutral

No provision of the license may be predicated on any individual technology or style of interface.

Licenses and communities

The title Open Source Software is applied to all software whose licenses meet the criteria mentioned above. There are many different licenses that qualify as Open Source, the most common being the GNU General Public License, the GNU Library General Public License (a.k.a. the Lesser GPL), and the BSD license.

The BSD license allows derived works to be turned into proprietary commercial products, whereas the GPL forbids it; the LGPL protects the work itself, but allows it to be linked to any software. The provision that derived works must be distributed under the same terms as the original means that you cannot in general combine code released under different Open Source licenses. Open Source specifically does not preclude commercial production of software, especially custom-made software.

I prefer to think of the Free Software movement as a subset of the Open Source community; the latter includes researchers and companies who produce open source software for pragmatic reasons, whereas the free software movement consists of communities of volunteers centred around software projects. Their members may participate for social reasons, like enhancing their reputation among their peers, or for shared ethical and political ideals.

While the open source community is mostly pragmatic, Richard M. Stallman of the Free Software Foundation is the most vocal defender of the freedom of the users, meaning ‘free as in free speech’ rather than ‘free as in free beer’. The hacker ideals seem to be influenced by the communist (free sharing of goods) and anarchist (shunning authority and hierarchy) movements, but this does not hinder capitalists like IBM from making profits from Open Source. In ‘The Hacker Ethic’, relative outsider Pekka Himanen analyses the motivation of these volunteer programmers philosophically.

The free software movement also contains the users, somewhat organised around User Groups, web sites, mailing lists, newsgroups and the like to provide mutual support. It forms the unifying core of the broader community, and a vocal lobby against pressure from parties like the BSA to undermine user rights. It also acts as a watchdog against parties trying to close the source code and fragment Linux the way Unix was fragmented. The movement and the open source companies probably benefit from each other’s activities.

This paper has been focusing on Open Source Operating Systems like Linux, GNU Hurd, AT&T Plan 9, the *BSD family (which includes Apple Darwin, the core of OS X) etc. versus Microsoft Windows, because kernels constitute very complex projects in themselves and because I am unwilling to discuss a large number of individual software products. An Operating System acts like the flag of its community (the Windows logo being an actual flag), but more importantly it provides the ecosystem in which other software competes for survival: the OS defines the Application Programming Interface to which applications are written, so that a Unix application can be compiled (with a little luck and hacking) on any Unix-like OS.

The Win32 API is incompatible with the Unix APIs except for the C library, so only the 32-bit versions of Windows are compatible amongst each other, with OS/2 a distant cousin. A Windows-compatible Open Source system, ReactOS, is slowly being developed; it benefits from the effort of creating the Wine library to run Windows binaries on Linux. Although there was an active shareware scene for MS-DOS, Open Source applications written for Windows are rare. A few have been ported from the Unices, and the Cygwin project is at the centre of this effort.

For the enterprise I see no compelling reasons against installing a couple of useful Open Source tools where they are suitable, but if Open Source and/or Open Standards are to be company policy, then a Unix-like OS should be chosen; or rather, I fail to see any reason for removing all Open Source software from a Unix™ system. However, given the preponderance of Windows on desktops, a mix of operating systems is probably the best choice for an enterprise, whereas for small companies that could lead to higher costs.

The new economics

The first impression is sometimes that Open Source is but a fashionable label for a bunch of amateur programmers’ utilities, or else that it defies the laws of economics. Reading ‘The Cathedral and the Bazaar’ by Eric S. Raymond leads to the conclusion that Open Source works like a magic formula, that it is the closed-source camp that is doing things backwards, and indeed that the laws of economics are what most need correction. The near-zero marginal costs of software distribution are of course instrumental in making a kind of communism succeed where it provided insufficient incentive for digging mines or tilling the soil.

The zeal with which Open Source fans are spreading their gospel is far more effective than millions of dollars spent on advertising campaigns, although the overzealous can have a contrary effect. Professional advertisers can only dream of harnessing a similar effect.

The Open Source tradition is rooted in the academic tradition of publishing the results of scientific research and basic technology. Every researcher profits from his or her predecessors’ work, yet makes only incremental progress. Only the largest companies can afford basic science; the rest is application of common knowledge. The pressure to reduce product development times leaves little time for core research. As hardware and software grow more complex, the number of newly developed products like programming languages and operating systems decreases. Computer manufacturers spend a little effort on supporting Linux on their wares because the cost of maintaining an OS of your own has become prohibitive.

Computer software only commands a good price for a limited period after its release, like motion pictures, which are later sold at lower rates to tv stations and on video tapes or discs. The need for support in the form of updates and bug fixes drives up the maintenance cost for the vendor, who is rewarded by the sale of the next version, but releasing a new version every year increases the number of programs needing support. A satisfied customer won’t buy the next version unless it offers very substantial improvements, but a dissatisfied customer will buy a competitor’s product. Microsoft has found an effective antidote in bundling a copy of DOS or Windows with every PC sold, so a middle-aged person could by now own half a dozen OS licenses but just one PC.

Open Source companies have to be creative in funding their development costs. One answer is to make your money from services and support, and these already represent the main source of income for firms like IBM, Sun, and HP. A related trend is to (nearly) give away hardware like mobile phones, ink jet printers, xerox copiers, or X-ray cameras and earn by selling phone calls, ink, paper, or film respectively. Most software vendors have something to give away, like web browsers to make more money on servers. A great advantage of giving away your product is the ability to gain market share, as exemplified by MySQL, which effectively enlisted more users than Oracle, despite the fact that Oracle had a superior product and seemed on its way to weed out its competitors.

Whereas the speedup of computer hardware has been a hallmark of progress, software development methodologies have shown little improvement after ca. 1970. As the market grew but productivity stagnated despite the growing programmer workforce, the software crisis entered the scene. The introduction of compatible computer families and shrink-wrapped software suites helped to reduce the amount of software needed, but the productivity problem was not solved.

It was observed that adding more programmers to a team produces results more slowly rather than faster, because of the extra coordination required. Corporate programming projects now tend to assign separate roles to different people, like functional design, technical design, implementation, documentation, testing, coordination, and maintenance. Projects are supposed to complete each phase before entering the next. Management plans the phases and monitors their fulfillment, especially the deadlines.

In practice, complex projects for bureaucratic organisations may never complete the functional design stage. Some 70 % of projects are never completed or never put into actual use. No amount of quality control will catch all bugs, so a significant amount of time and effort is spent on fixing problems after the software is released. Programmers hurrying to meet a deadline are a huge cause of bugs.

Open Source programmers, as reported by E.S. Raymond, find that replacing heavily managed closed-source development with a small informal project team makes them more efficient. Such anarchy could scare off corporate people; they want someone to be responsible when a deadline is missed, someone to call when a problem occurs, someone to sue when they fail to solve the problem. In reality, going to court rarely solves the problem.

Open Source gives you no guarantee, but usually good quality. In the commercial world, writing software fast and sloppily increases the monetary reward. Without such an incentive, Open Source enhances the status of good programmers, while mediocre ones are more likely to drop out.

In practice the gravity of the problem is much reduced by the Linux distributors. These collect software with a good reputation, compile it into an easier-to-use package, do some beta-testing and fix problems when users report them.

Open Source development avoids the problems of coordinating a large team by creating a small core team surrounded by a large user community, consisting largely of system administrators. These act mostly as testers, and since they possess the source code, they may even find the cause of a problem and submit a patch for it. The crucial difference is that having an army of contributors helps find even the trickiest of bugs without the inefficiency of managing a large team.

The direct contact between user and developer, coupled with the frequent release of incremental updates, speeds up development greatly in the most useful direction. Well-written code can stand some redesign when the original does not quite work out. Recently, the extreme programming (XP) development model has been proposed, which bears some similarities to OSS practices.

Code reuse is often preached, but less often practiced, in part because of what is called the ‘Not Invented Here’ syndrome. Open Source developers practice reuse to a great extent. Although there are competing groups working on similar projects, the duplication of effort is less than in the closed-source world and the best products tend to be quickly identified. Although Open Source licenses allow projects to fork, the social standards of the community strongly oppose it, and it does not happen often in practice.

Closed Source software production really only makes sense for a company that makes its revenue from licensing costs. Most software is still being produced within the organisation that uses it. Closing the source does not help selling tailor-made software. As a rule, giving away source code does not make you any poorer, but it potentially keeps you from collecting a license fee. An organisation that is not primarily an IT company should not need to participate in the development of a product like Linux, but having a few of its programmers participating in an OSS application may be more cost-effective than developing it entirely in-house.

Comparing Open Source and Closed Source

The ‘Halloween paper’ leaked from a Microsoft employee, comparing Linux with Windows, found that for all the differences in method, the results are often comparable.

- Is Open Source more innovative?

The progress of the Linux kernel suggests it is. It contains the fruit of many academic research projects, manufacturers have contributed some unique techniques, the FBI and NSA have brought security enhancements, etc. The BSDs seem to move more slowly, but have a little better stability. What OSS projects are especially good at is preserving and improving old software. The fact that Microsoft takes five whole years to produce a successor to Windows XP illustrates how lack of competition breeds freedom from innovation.

- Does Open Source deliver faster?

Open Source development lacks bureaucratic procedures to slow development down. Developers can often be reached personally by electronic mail. One study found that Linux kernel developers quickly produced a patch after a bug was brought to their attention, whereas Microsoft took a few weeks to release a fix, and Sun took a year and a half. For development of a new feature, Linux and Windows come closer. The absence of deadlines makes results unpredictable, though.

- Is Open Source cheaper?

Apart from the consequences of changing an entire platform, single applications can be obtained virtually free of charge. There is an associated cost for testing, installation and maintenance, but this is also the case for Closed Source software. Open Source essentially moves those burdens from the developer onto the shoulders of the Linux distribution vendor. Open Source in general drives operational costs down, but the transition costs can be prohibitive if it means large applications must be rewritten.

- Does Open Source have fewer bugs?

Open Source gives developers nothing to hide behind. If a developer does not respond to requests, someone else will take his code and release a fix. Good software becomes known and gets included in distributions. Distributors add their own fixes. One study found better quality in Open Source than in proprietary software. In my experience there is a stronger correlation between the number of users and bugginess; in programs written for a single user, bugs are reported and resolved much more slowly than in software used by millions.

Mission-critical software, where failure can cost hundreds of lives or jeopardise national security, needs thorough testing and code review before deployment. The average quality of Linux or Microsoft software is not sufficient. NASA employs the most rigorous software quality control in the business, but still space shuttles fail.

- Is Open Source more secure?

The security community is wont to decry ‘security through obscurity’ practices and argue for openness. Microsoft argues that publishing exploits helps them to be exploited before most of the users have installed patches. Open Source aids both finding exploits and creating fixes. Microsoft software has some fundamental problems that cannot be patched without extensive redesign. In the past Microsoft gave priority to feature-richness and user-friendliness over software quality; in 2003 they changed their development method to pay structural attention to security issues, so far with less than spectacular success. Linux is not necessarily very secure either, but a good system administrator can make a difference; there are a few distributions with enhanced security.

- Is Open Source more compatible?

Yes. Microsoft is often seen as reducing compatibility to prevent competitors from snatching some of its market share, while others try to make their products more interoperable with the leader. Where Unix vendors failed to achieve standardisation, the Free Software Foundation created utilities that are portable to all relevant platforms (thanks to the Cygwin project many of these are also available for Windows), to the extent that there are now three Unix streams: System V, BSD, and GNU. Interoperability is a great selling point for Open Source.

- Is Open Source more standardised?

Open Source implements many standards like POSIX and the Internet RFCs, focussing on interoperability and compatibility. If Linux were certified as compatible with all the standards of The Open Group, it would earn the right to be called Unix (remember that GNU is an acronym for GNU’s Not Unix). The complex standards of ISO networking have hindered their implementation. The ad-hoc nature of Open Source does not square too well with participating in endless bureaucratic standardisation committees. The standards-compliance of OSS might be a bit of a myth. The modest Linux Standard Base project is aimed at getting the various Linux distributions to the point where binary programs can be installed and run irrespective of the vendor.

- Is Linux ready for the desktop?

In many ways Linux was ready for the desktop with the release of version 1.0, as it was for the server, and with version 2.6 it is suitable for enterprise computing, an area where Microsoft has been lagging behind.

Linux was created as a Unix-like OS and is quite successful in a Unix shop. The question is often phrased to mean whether Linux is a better Windows than Microsoft Windows. There are a number of projects which emulate popular Microsoft products. For instance, Wine seeks to emulate Windows on Linux, and ReactOS is developing a complete OS. These are among the least successful of Open Source projects, because Microsoft makes this goal particularly hard. StarOffice / OpenOffice is a good office suite, if you can live with less than 100 % compatibility with the market leader. Samba on Unix can replace a Windows server and do a better job than the original.

- Does using Open Source pose legal problems?

Disclaimer: I Am Not A Lawyer. The legal situation in the United States is more difficult than here in the European Union. The Open Source licenses really only create a problem if you sell closed-source software and include open source code in your product; you should instruct your programmers accordingly. I doubt that you will be punished for using source code from someone who falsely claimed to have written it himself. It used not to be possible to patent computer software by itself, but it may become so. But then, using closed source software also carries its own legal problems.

- Is Open Source always better?

While anecdotal evidence suggests that Open Source / Free Software offers inherent advantages, Open Source represents only a small fraction of all software in use, though in some cases a highly successful one; on the other hand, only a small fraction of all proprietary software achieves high sales. This paper is somewhat biased by pitting one shining example of Open Source software against a high-profile but problematic example of proprietary software.

Conclusions

The nature of Open Source Software does not preclude its deployment in the Enterprise. Considerations of cost and quality suggest that OSS may be the preferred choice in some cases, especially for middleware, as the complex and diverse needs of a large enterprise are not met by any single vendor. Use Windows, Apple, IBM, Unix or Linux wherever they serve you best, but balance Java, which is supported by many vendors, against .NET, which could lock your entire infrastructure onto Microsoft.

Open Source makes software development cheaper by separating it from quality control, support, etc., while giving you more choice in obtaining such services. Rather than building an ICT infrastructure from untested source code, enterprises will be better off purchasing solutions and/or services from specialised suppliers.

Open Source is no magic bullet. For the enterprise, the difference between Linux and Windows is less of an issue than the difference between monopoly and choice. As Open Source adoption grows, it may become harder, but not impossible, to make money from it.

References

J. Baten, Linux in het bedrijf, Academic Service, Schoonhoven, NL, 2000
P. Himanen, The Hacker Ethics, Random House, New York, NY, 2002
E.S. Raymond, The Cathedral and the Bazaar, O’Reilly & Associates, Sebastopol, CA, 2001
A.S. Tanenbaum, Modern Operating Systems, Prentice-Hall, Englewood Cliffs, NJ, 1992
D.A. Wheeler, Why Open Source Software / Free Software (OSS/FS)? Look at the Numbers, http://www.dwheeler.com/oss_fs_why.html